Article

A Hybrid Harmony Search Algorithm for Distributed Permutation Flowshop Scheduling with Multimodal Optimization

1 School of Computer Science, Nanjing Audit University, Nanjing 211815, China
2 School of Software Engineering, Jinling Institute of Technology, Nanjing 211169, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(16), 2640; https://doi.org/10.3390/math13162640
Submission received: 28 June 2025 / Revised: 31 July 2025 / Accepted: 6 August 2025 / Published: 17 August 2025

Abstract

Distributed permutation flowshop scheduling is an NP-hard problem that has become a hot research topic in the fields of optimization and manufacturing in recent years. Multimodal optimization seeks multiple global and local optima of a function. This study proposes a harmony search algorithm with iterative optimization operators to solve this NP-hard problem as a multimodal optimization task, with the objective of makespan minimization. First, the initial solution set is constructed using a distributed NEH operator. Second, after new candidate solutions are generated, efficient iterative optimization operations are applied to improve them, and the worst solutions in the harmony memory (HM) are replaced. Finally, when the stopping condition of the algorithm is met, the solutions in the harmony memory that satisfy the multimodal optimization criterion are output. The constructed algorithm is compared with three meta-heuristics (an iterative greedy algorithm with a bounded search strategy, an improved Jaya algorithm, and a novel evolutionary algorithm) on 600 newly generated datasets. The results show that the proposed method outperforms the three compared algorithms and is applicable to solving distributed permutation flowshop scheduling problems in practice.

1. Introduction

Compared to the single-workshop processing mode of the traditional manufacturing system, the distributed manufacturing system (DMS) fully utilizes the resources of the workshops in distributed factories. By effectively allocating raw materials, optimally combining production capacity, and reasonably sharing resources, the DMS achieves fast product manufacturing at reasonable cost.
Distributed scheduling plays a crucial role in the DMS; it is large-scale, nonlinear, strongly constrained, multiobjective, and uncertain, and has always been a hot topic in the fields of optimization and manufacturing. Its scientific optimization directly affects the efficiency and long-term development of production enterprises. Therefore, developing efficient scheduling optimization algorithms is one of the keys to improving production efficiency, saving energy, reducing emissions, decreasing production costs, and resolving production bottlenecks.
In the DMS, the distributed flowshop scheduling problem (DFSP) is a very important problem. Due to the presence of multiple factories, the DFSP encounters many challenges, such as the coupling relationship between processing factories, allocation of machines within the factory, and sorting of jobs to be processed [1]. Compared with the traditional NP-hard flowshop scheduling problem in a single factory, the DFSP has a larger solution space and involves greater difficulty in arriving at the optimal solution. It imposes more stringent requirements on the accuracy and speed of the algorithm. Therefore, research on the problem has theoretical and applicative importance.
Many researchers have proposed algorithms for solving the DFSP in different scenarios and under different constraints, focusing on different optimization objectives. Li and Wu [2] proposed a heuristic for no-wait flowshops with makespan minimization based on total idle-time increments. Gogos [3] addressed the DFSP for makespan minimization by using constraint programming and its special scheduling features. Hao, Li, and Du et al. [4] studied the distributed hybrid flowshop scheduling (DHFS) problem for makespan minimization and established a mathematical model. Geng and Li [5] studied an improved hyperplane-assisted evolutionary algorithm to solve a distributed mixed flowshop scheduling problem in a glass manufacturing system with two objectives: minimizing the makespan and the total energy consumption. Zhang, Geng, and Li et al. [6] proposed an effective Q-learning-based multiobjective particle swarm optimization algorithm to solve the DFSP with the objectives of minimizing the total completion time and the energy consumption. Bai, Liu, and Zhang et al. [7] studied a heterogeneous distributed permutation flowshop scheduling problem to minimize the makespan. Zhao, Zhuang, Wang, and Dong [8] investigated the distributed no-idle permutation flowshop scheduling problem (DNIPFSP), introducing an improved iterative greedy (IIG) algorithm that optimizes the makespan and total tardiness simultaneously while considering the scale and variety of problem instances. Li, Pan, and Sang et al. [9] addressed a distributed permutation flowshop scheduling problem in which part of the jobs are subject to a common deadline, established a mathematical model, and proposed a self-adaptive population-based iterated greedy algorithm with the objective of minimizing the total completion time. Song, Lin, and Chen [10] studied the distributed assembly permutation flowshop scheduling problem with sequence-dependent setup times, proposing an effective two-stage heuristic with the objective of minimizing the makespan.
The distributed permutation flowshop scheduling problem (DPFSP) is considered in this study as it is one of the most widely studied problems in the field of scheduling [11]. For example, Yu, Zhong, and Sun et al. [12] studied the energy-efficient distributed permutation flowshop scheduling problem with sequence-dependent setup times for the criterion of minimizing the total flowtime and total energy consumption.
The harmony search (HS) algorithm, proposed by Geem, Kim, and Loganathan [13], is a simple but effective meta-heuristic. It is based on the processes involved in musical performance when a composer searches for a better musical harmony, such as during jazz improvisation. Jazz improvisation seeks to find musically pleasing harmonies as determined by an aesthetic standard, just as the process of optimization seeks to find a global optimal solution as determined by an objective function. The pitch of each musical instrument determines the overall aesthetic quality, just as the value of the objective function is determined by the set of values assigned to each decision variable. The HS algorithm has been widely applied to various optimization problems in science and engineering [14]. In particular, it has also been deeply explored for power system optimization [15]. In recent years, Khaleel, Zaher, and Raeid [16] combined War Strategy Optimization technology with the harmony search algorithm to form a hybrid algorithm called H-WSO, which is used to improve the effectiveness of the War Strategy Optimization algorithm in optimization problems. Wang, Zhang, Wang, and Pan [17] proposed a three-way decision-based island harmony search algorithm for robust flowshop scheduling with uncertain processing times depicted by big data. Li, Xue, Li, and Shi [18] conducted research on the performance of the harmony search algorithm with local search algorithms to solve flexible job-shop scheduling problems.
As the DPFSP is one of the fastest-growing topics in the scheduling literature and HS is a simple but effective meta-heuristic well suited to this problem, applying HS to the DPFSP is a challenging and promising direction. This paper proposes a harmony search algorithm with iterative optimization operators to solve the DPFSP as a multimodal optimization task with the objective of makespan minimization.
The remainder of this paper is organized as follows: Section 2 describes the considered problem and presents its mathematical model with the objective of minimizing the makespan. Section 3 presents a hybrid harmony search algorithm, named IOHS, that incorporates iterative optimization operators. Section 4 reports simulation experiments that verify the effectiveness of the algorithm and compare its performance with other algorithms. Section 5 provides conclusions and outlines directions for future research.

2. Problem Description

The DPFSP can be described as follows: a batch of sequentially numbered jobs are first assigned to some distributed homogeneous factories, and then the jobs in each factory are scheduled and processed sequentially on the machines in that factory. The objective is to determine a near-optimal job–factory allocation and intra-factory job sequencing for multimodal optimization with makespan minimization.
The main assumptions are as follows:
  • All jobs are ready when processing starts.
  • The number of jobs and their processing times on machines are known, and are non-negative.
  • Each job can be processed only on one machine in a given factory at a given time, and cannot be pre-empted.
  • Each machine can process only one job at a time, and completes all jobs in sequence.
  • The preparation time for each job is sequence independent, and is included in its processing time.
In Figure 1, there are nine sequentially numbered jobs assigned to two distributed manufacturing factories. It shows that there are four jobs scheduled in sequence {2,8,5,6} in the first factory, and five jobs scheduled in sequence {1,4,3,7,9} in the second one. While each factory has its own makespan, the maximum of these makespans is regarded as the makespan of the overall schedule. Thus, the makespan of this schedule as shown in Figure 1 is the completion time of job 9 on machine 3 in factory 2.
The symbols and definitions used in this paper are as follows:
F: Number of factories.
f: Index for factories, f = {1, 2, …, F}.
M: Number of machines in each factory.
m: Index for machines, m = {1, 2, …, M}.
Jf: Number of jobs in factory f.
J: Total number of jobs, J = J1 + J2 + … + JF.
jf,n: The n-th job assigned to factory f, n = {1, 2, …, Jf}.
pf,n,m: The processing time of the n-th job in factory f on machine m.
A solution π can then be given by Equation (1):
\pi = \{J_1, J_2, \ldots, J_F,\ j_{1,1}, j_{1,2}, \ldots, j_{1,J_1},\ j_{2,1}, j_{2,2}, \ldots, j_{2,J_2}, \ldots,\ j_{F,1}, j_{F,2}, \ldots, j_{F,J_F}\}   (1)
Equation (1) shows that π is composed of two parts: {J1, J2, …, JF} gives the number of jobs assigned to each factory, while {j_{i,1}, j_{i,2}, …, j_{i,J_i}} indicates the sequence of jobs processed in the i-th factory.
The formulae for calculating the completion time of each job on the machines in the factory are given by Equations (2)–(5):
C(j_{f,1}, 1) = p_{f,1,1}   (2)
C(j_{f,1}, m) = \sum_{k=1}^{m} p_{f,1,k}   (3)
C(j_{f,n}, 1) = \sum_{t=1}^{n} p_{f,t,1}   (4)
C(j_{f,n}, m) = \max\{C(j_{f,n}, m-1),\ C(j_{f,n-1}, m)\} + p_{f,n,m}   (5)
In Equation (2), C(jf,1, 1) is the completion time of the first job jf,1 on the first machine in each factory. pf,1,1 is the processing time for this job on the given machine. In Equation (3), C(jf,1, m) is the completion time of the first job jf,1 on the m-th machine in each factory. pf,1,k is the processing time of the first job on the k-th machine in factory f. In Equation (4), C(jf,n, 1) is the completion time of the n-th job jf,n on the first machine in each factory. pf,t,1 is the processing time of the t-th job on the first machine in factory f. In Equation (5), the completion time of the n-th job jf,n on the m-th machine in factory f, C(jf,n, m) is the sum of pf,n,m and the maximum value of C(jf,n, m − 1) and C(jf,n−1, m):
C_f = C(j_{f,J_f}, M)   (6)
C(\pi) = \max\{C_1, C_2, \ldots, C_F\}   (7)
The completion time of factory f is the completion time of its last job j_{f,J_f} on the last machine M, as shown in Equation (6). The makespan for collaborative processing in the DMS is the maximum of C1, C2, …, CF, as shown in Equation (7).
The solution π * in the solution space Π that has the minimum makespan can be depicted by Equation (8):
C(\pi^*) = \min_{\pi \in \Pi} C(\pi)   (8)
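The completion-time recurrence of Equations (2)-(7) can be sketched in Python as follows. This is an illustrative sketch only: the function and variable names are my own, and each factory's processing times are assumed to be given as a list of rows, one row per scheduled job.

```python
def factory_makespan(proc_times):
    """Completion times per Equations (2)-(5): proc_times[n][m] is the
    processing time of the n-th scheduled job on machine m; returns the
    completion time of the last job on the last machine (Equation (6))."""
    completion = []
    for n, row in enumerate(proc_times):
        completion.append([0] * len(row))
        for m, p in enumerate(row):
            prev_job = completion[n - 1][m] if n > 0 else 0
            prev_machine = completion[n][m - 1] if m > 0 else 0
            completion[n][m] = max(prev_job, prev_machine) + p
    return completion[-1][-1] if proc_times else 0

def schedule_makespan(factories):
    """Equation (7): the makespan of a distributed schedule is the
    maximum of the factory makespans."""
    return max(factory_makespan(f) for f in factories)
```

For instance, a factory processing two jobs with times [[2, 2], [2, 2]] finishes at time 6, and the schedule makespan is the maximum over all factories.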
The aim of this paper is to construct an algorithm that finds multiple global and local optima of a function, which is defined as multimodal optimization in [19]. In this way, the user gains better knowledge of the different optimal solutions in the search space and can choose the one that best meets the system's performance requirements.

3. Proposed Algorithm

3.1. Harmony Search Algorithm

The basic process of HS is shown in Algorithm 1.
Algorithm 1. Harmony search algorithm (HS).
01. Initialize parameters rh, rp, sh, bw;
02. For (i = 1 to sh){
03.   Select values within the range of the decision variable to generate a harmony solution;
04.   Put the solution into the harmony memory HM;
05. }
06. Repeat
07.   Set π_new ← ∅;
08.   For (i = 1 to n){
09.     Generate a random number r1;
10.     If (r1 < rh){
11.       Select a value as the i-th decision variable of π_new from the historical solutions in HM;
12.       Generate a random number r2;
13.       If (r2 < rp)
14.         Adjust this decision variable according to the adjustment bandwidth bw to obtain a new decision variable;
15.     }
16.     Else{
17.       Select a value as the i-th decision variable of π_new within the range of values of the decision variable;
18.     }
19.   }
20.   According to the objective function, find the worst solution π_worst in HM;
21.   If (π_new is better than π_worst)
22.     Replace π_worst with π_new;
23. Until (the stopping condition is satisfied);
24. Return;
In the HS algorithm, the rate of consideration of HM, rh, is the probability of taking a value from HM. The rate of pitch adjustment, rp, is the probability of adjusting a value. sh is the size of HM, and bw is the adjustment bandwidth. r1 and r2 are two random numbers in the range (0, 1). The algorithm consists of three main processes: (1) harmony memory initialization, (2) new solution generation, and (3) harmony memory update.
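As an illustration only, a minimal continuous version of Algorithm 1 might look like the following sketch. The sphere objective, parameter values, fixed seed, and all names here are assumptions for demonstration, not part of the paper.

```python
import random

def harmony_search(obj, n, lo, hi, sh=10, rh=0.9, rp=0.3, bw=0.05, iters=2000):
    """Minimal continuous harmony search (cf. Algorithm 1): rh is the HM
    consideration rate, rp the pitch-adjustment rate, sh the HM size,
    and bw the adjustment bandwidth."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    hm = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(sh)]
    for _ in range(iters):
        new = []
        for i in range(n):
            if random.random() < rh:          # take variable i from memory
                x = random.choice(hm)[i]
                if random.random() < rp:      # pitch adjustment within bw
                    x += random.uniform(-bw, bw)
            else:                             # draw from the full range
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(hm, key=obj)              # replace the worst harmony
        if obj(new) < obj(worst):
            hm[hm.index(worst)] = new
    return min(hm, key=obj)

best = harmony_search(lambda v: sum(x * x for x in v), n=3, lo=-5.0, hi=5.0)
```

The replace-the-worst step is what distinguishes HS from pure random sampling: the memory monotonically improves, and pitch adjustment performs a local perturbation around remembered values.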

3.1.1. Harmony Memory Initialization

The initialization of HM, which is the first stage of this evolutionary computing algorithm, is used to generate an initial solution set. The initial solution set, containing sh harmony solutions, is randomly generated from the solution space with n variables, and is placed in HM. The form of HM is given in Equation (9).
HM = \begin{bmatrix} \pi_1 \\ \vdots \\ \pi_{sh} \end{bmatrix} = \begin{bmatrix} \pi_1^1 & \cdots & \pi_1^n \\ \vdots & \ddots & \vdots \\ \pi_{sh}^1 & \cdots & \pi_{sh}^n \end{bmatrix}   (9)

3.1.2. New Solution Generation

A two-stage process, involving construction and adjustment, is used to generate a new solution. In the construction stage, a random number r1 is generated and compared with rh. If r1 is smaller than rh, a harmony variable is selected from HM; otherwise, a harmony variable is generated from the solution space. For each harmony variable obtained from HM, another random number r2 is generated in the adjustment stage and compared with rp. If r2 is smaller than rp, the variable needs to be adjusted according to the adjustment bandwidth bw to obtain a new value. Otherwise, no adjustments will be made. By performing this process n times, a new harmony π n e w is obtained.

3.1.3. Harmony Memory Update

The generated new solution π_new is evaluated based on the optimization objective function. If π_new is better than the worst solution π_worst in HM, π_worst is replaced by π_new. Otherwise, no update is needed.

3.2. Harmony Search with Iterative Optimization

The HS algorithm was originally developed for continuous functions of n variables and is inherently continuous [13]. However, the problem discussed in this paper involves decision variables with discrete characteristics. Hence, a hybrid HS algorithm, based on the classical HS algorithm, is proposed to solve the considered problem.

3.2.1. Initialization of HM

Due to the homogeneity of the DMS, all factories have the same number and sequence of machines with identical processing characteristics, so there is no need to optimize the order of the factories, and they can be arranged in numerical order. Each solution in HM consists of two parts. The first part gives the number of jobs allocated to each factory in order. The second part gives the sequence of the jobs in each factory. The number of jobs assigned to a factory and the resulting schedule can significantly influence that factory's makespan. Therefore, the two factors, job–factory allocation and job sequencing within a factory, jointly affect the scheduling objective; they cannot simply be separated and need to be studied as a whole.
Assume that there are nine sequentially numbered jobs assigned to three distributed manufacturing factories. Table 1 shows an example of the HM. It contains two harmony solutions, π_1 and π_2. Consider the solution π_2 = {{4,2,3},{{1,3,4,9},{2,5},{6,7,8}}}: {4,2,3} indicates that there are three processing factories, with four jobs {1,3,4,9} in the first factory, two jobs {2,5} in the second, and three jobs {6,7,8} in the third.
The quality of the initial solution set is very important for the evolution of the algorithm, because it determines whether the algorithm can quickly find a near-optimal solution. The NEH algorithm [20] is an efficient and widely used heuristic for obtaining an initial solution to scheduling problems with makespan minimization. Its main idea is as follows: First, arrange the n jobs in descending order of their total processing times. Second, pick the first two jobs from the list, choose the better of their two possible sequences by calculating the makespan of each, and set i = 2. Third, pick the job in the (i + 1)-th position of the list and find the best sequence by placing it at all possible (i + 1) positions in the partial sequence found previously. Repeat the third step until i = n.
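The single-factory NEH procedure described above can be sketched compactly in Python. The makespan helper is inlined so the sketch is self-contained; all names and the data layout (proc[j][m] = time of job j on machine m) are my own, not the paper's.

```python
def flow_makespan(times):
    """times[n][m]: processing time of the n-th job in the sequence on
    machine m; returns the flowshop makespan."""
    c = []
    for n, row in enumerate(times):
        c.append([0] * len(row))
        for m, p in enumerate(row):
            c[n][m] = max(c[n - 1][m] if n else 0, c[n][m - 1] if m else 0) + p
    return c[-1][-1] if times else 0

def neh(proc):
    """Single-factory NEH: proc[j][m] is the processing time of job j on
    machine m; returns a job order."""
    # Step 1: jobs in descending order of total processing time.
    jobs = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = []
    for job in jobs:
        # Steps 2-3: try the job in every slot of the partial sequence
        # and keep the insertion with the smallest makespan.
        cands = [seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(cands, key=lambda s: flow_makespan([proc[j] for j in s]))
    return seq
```

On the small instance proc = [[5, 1], [1, 5], [3, 3]], this returns the order [1, 2, 0] with makespan 10, versus 14 for the naive order [0, 1, 2].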
However, the problem considered here is a distributed permutation flowshop scheduling problem with multiple factories, so the NEH algorithm cannot be used directly. Thus, a distributed NEH algorithm (D-NEH) is constructed to obtain the initial solution set. The D-NEH can be described as follows: First, generate one job sequence by arranging the J jobs in descending order of their processing times on the machines, and randomly generate the other (sh − 1) job sequences. Then, for each of the sh job sequences, assign the first F jobs to the F factories (one job per factory), pick the next job from the sequence, find the best position by placing it at all possible positions in the partial sequence of each factory, and repeat this pick–find operation until every job has been assigned to its proper factory and best position. After each new job sequence π_i is generated, it is placed in HM. Finally, the initial solution set HM is obtained.
The D-NEH is shown in Algorithm 2. Treating sh as an independent parameter, the time cost of this algorithm mainly lies between pseudocode lines 4 and 16, so the algorithm's overall time complexity is O(sh·J²·F).
Algorithm 2. D-NEH algorithm.
01. Initialize the parameter sh;
02. Generate the first job sequence π_1 by arranging the J jobs in descending order according to their processing times on the machines;
03. Randomly generate the other (sh − 1) job sequences;
04. For (i = 1 to sh){
05.   π_i ← ∅;
06.   Assign the first F jobs to the F factories one by one and get the partial job sequence π_i;
07.   For (j = F; j < J; j++){
08.     Pick the job in the (j + 1)-th position of the job sequence π_i;
09.     For (f = 1; f <= F; f++){
10.       For (k = 1; k <= J; k++){
11.         Place the (j + 1)-th job into the k-th possible position of the partial sequence for factory f;
12.       }
13.       Record the best partial job sequence in factory f;
14.     }
15.     Pick the partial job sequence with minimum makespan as the current job sequence π_i;
16.   }
17.   Put the job sequence π_i into HM;
18. }
19. Return;
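The inner assign-and-insert loop of Algorithm 2 might be sketched as follows for a single priority sequence. This is a sketch under my own assumptions (function names, data layout, first-occurrence tie-breaking), not the paper's implementation; the makespan helper is inlined for self-containment.

```python
def flow_makespan(times):
    """Flowshop makespan; times[n][m] = time of n-th job on machine m."""
    c = []
    for n, row in enumerate(times):
        c.append([0] * len(row))
        for m, p in enumerate(row):
            c[n][m] = max(c[n - 1][m] if n else 0, c[n][m - 1] if m else 0) + p
    return c[-1][-1] if times else 0

def d_neh_insert(priority, proc, F):
    """Greedy allocation step of D-NEH: seed each of the F factories with
    one job, then place every remaining job of the priority sequence at
    the factory/position that keeps the overall makespan smallest."""
    factories = [[j] for j in priority[:F]]
    for job in priority[F:]:
        best, best_ms = None, float("inf")
        for f in range(F):
            for k in range(len(factories[f]) + 1):
                trial = factories[f][:k] + [job] + factories[f][k:]
                ms = max(flow_makespan([proc[j] for j in
                                        (trial if g == f else factories[g])])
                         for g in range(F))
                if ms < best_ms:
                    best, best_ms = (f, k), ms
        f, k = best
        factories[f].insert(k, job)
    return factories
```

On a toy single-machine instance with processing times 4, 3, 2, 1 and two factories, the greedy insertion balances the load so that the overall makespan is 5.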

3.2.2. Generation of New Solution

The process of generating new solutions includes three stages. The first two stages are identical to those in the HS algorithm, while the last one is an optimization stage. Unlike the case of a single factory with one solution sequence, a DMS in the construction stage has at least two factories, each of which does not necessarily contain the same number of jobs. When constructing a new solution sequence, the number t (0 < t ≤ sh) of different solution structures contained in HM should first be obtained. According to the number of jobs in each factory, if the random number r1 (0 < r1 < 1) is smaller than the parameter rh, a new solution is constructed by selecting jobs from HM; otherwise, it is constructed by selecting jobs from the range of possible values for each decision variable. Note that, when constructing the new solution, t solutions are generated instead of one. The construction stage essentially combines architectural features of several meta-heuristics: it preserves the historical traces of past vectors, similarly to tabu search [21]; it can accept solutions with a varying probability from the beginning to the end of the run, similarly to simulated annealing [22]; and it retains several vectors, as genetic algorithms do, to enable the newly generated solutions to evolve.
The adjustment stage introduces variation based on the parameter rp, balancing convergence and divergence in the generation of the new solution. That is, while constructing the solution, the algorithm mutates each decision variable with probability rp. The main idea of this adjustment operation is as follows: a decision variable is selected from the current solution sequence and exchanged with a randomly selected decision variable close to it, and the solution with the best objective value is used as the new solution. The idea is to introduce a small random disturbance to avoid premature convergence.
In the optimization stage, an iterative optimization algorithm (IOA), composed of the RZ and PE operators [2], is used to further search the neighborhood of the candidate solution to find more and better solutions. The RZ and PE algorithms are commonly used and relatively efficient heuristics. The main idea of the RZ algorithm is as follows: for a given sequence of n jobs, select a job from the sequence and insert it into the n possible positions to find the best sequence; repeat this select–insert operation for the next job in the sequence until all jobs have been optimized. The main idea of the PE algorithm is as follows: for a given sequence of n jobs, pick a job and exchange it with the other n − 1 jobs to find the best sequence; repeat this pairwise exchange operation for the next unpicked job until all jobs have been optimized. The time complexities of the two algorithms are both O(mn²), where m is the number of machines.
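The RZ (select–insert) and PE (pairwise exchange) passes described above can be sketched for a single-factory sequence as follows. Names and data layout are my own assumptions, and the makespan helper is inlined so the sketch runs on its own.

```python
def flow_makespan(times):
    """Flowshop makespan; times[n][m] = time of n-th job on machine m."""
    c = []
    for n, row in enumerate(times):
        c.append([0] * len(row))
        for m, p in enumerate(row):
            c[n][m] = max(c[n - 1][m] if n else 0, c[n][m - 1] if m else 0) + p
    return c[-1][-1] if times else 0

def rz_pass(seq, proc):
    """RZ: remove each job in turn and reinsert it at its best position."""
    for job in list(seq):
        base = [j for j in seq if j != job]
        cands = [base[:k] + [job] + base[k:] for k in range(len(base) + 1)]
        seq = min(cands, key=lambda s: flow_makespan([proc[j] for j in s]))
    return seq

def pe_pass(seq, proc):
    """PE: try swapping each job with every other job, keep the best."""
    for i in range(len(seq)):
        cands = [list(seq)]
        for j in range(len(seq)):
            if j != i:
                s = list(seq)
                s[i], s[j] = s[j], s[i]
                cands.append(s)
        seq = min(cands, key=lambda s: flow_makespan([proc[j] for j in s]))
    return seq
```

Both passes evaluate O(n) candidate sequences per job and each evaluation costs O(mn), giving the O(mn²) complexity stated above.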
Based on the above description, the IOA can be depicted as follows: for each new candidate solution π_i (i = 1, 2, …, t), select one job from the sequence with the largest makespan (if there is more than one such sequence, select one randomly), and insert it into the other possible positions to find the best sequence π′_best. For π′_best, pick this job and exchange it with the other jobs to further improve π′_best. Repeat the select–insert and pairwise exchange operations for the next selectable job until all the jobs have been optimized; the near-optimal solution is thereby obtained. Algorithm 3 shows the IOA in detail. The time cost of the IOA mainly lies between lines 2 and 19 of the pseudocode, with a time complexity of O(J²·F). Thus, for each new candidate solution π_i (i = 1, 2, …, t), the overall time complexity of the algorithm is O(J²·F).
Algorithm 3. Iterative optimization algorithm (IOA).
01. Set π_best ← π_i (i = 1, 2, …, t);
02. Repeat
03.   Find the factory with the largest makespan in π_best and record it as fmax;
04.   Set global ← false;
05.   For (j = 1 to J_fmax){
06.     For (f = 1 to F){
07.       If (f = fmax) continue;
08.       Insert job j into factory f;
09.       Apply the select–insert operation to optimize the job sequence of factory f and get the best sequence π′_best;
10.       Apply the pairwise exchange operation to further optimize the job sequence of factory f and get the best solution π′_best;
11.       If (C(π′_best) < C(π_best)){
12.         Replace π_best with π′_best;
13.         Set global ← true;
14.       }
15.       If (global = true) break;
16.     }
17.     If (global = true) break;
18.   }
19. Until global = false;
20. Set π_i ← π_best;
21. Return;
After the construction and adjustment stages as well as the optimization of the candidate solutions, t new candidate solutions π_new = {π_1, π_2, …, π_t} are generated and sorted in descending order according to the value of the objective function to facilitate the subsequent update of HM.
The proposed new solution generation algorithm (NSGA) is described in Algorithm 4. Its time cost mainly lies between lines 4 and 20 of the pseudocode, with a time complexity of O(t·J²·F), where t is the number of solution structures. As the maximum value of t is sh, the worst-case time complexity of the algorithm is O(sh·J²·F).
Algorithm 4. New solution generation algorithm (NSGA).
01. Initialize parameters rh, rp, bw;
02. Set π_new ← ∅;
03. Obtain the total number of solution structures t of HM;
04. For (i = 1 to t){
05.   Let the i-th solution structure be the structure of the new solution π_i;
06.   Set π_i ← ∅;
07.   For (j = 1 to J){
08.     Generate two random numbers r1 and r2;
09.     If (r1 < rh){
10.       Select a new job from column j in HM and insert it into the j-th position of the new solution π_i;
11.     }Else{
12.       Select a new job from the job set and insert it into the j-th position of the new solution π_i;
13.     }
14.     If (r2 < rp){
15.       Adjust the job in π_i to within the range (max{0, j − bw}, min{j + bw, J});
16.     }
17.   }
18.   Apply the IOA operator to optimize the new solution π_i;
19.   Set π_new ← π_new ∪ π_i;
20. }
21. Sort the t new candidate solutions in descending order according to the values of the objective function, and obtain the new solution set π_new = {π_1, π_2, …, π_t};
22. Return;
In the NSGA, the IOA is used to further optimize the candidate solutions for better ones. This not only increases the diversity of solutions in HM, but also enables the algorithm to expand the search space, and find more and better solutions. Therefore, this algorithm has a good ability to search the solution space Π and find more near-optimal solutions while maintaining their diversity.

3.2.3. Update of HM

The update operation is used to obtain more and better solutions, which can guide the proposed algorithm to search for a better solution space and accelerate the convergence of the solution. Therefore, the update operation plays an important role in maintaining high-quality solutions and ensuring the convergence of the algorithm.
According to the objective function, the main idea of the update operation is as follows: for each newly generated harmony solution π_i (i = 1, 2, …, t), find the worst harmony solution π_h (h ∈ {1, 2, …, sh}) in HM; if π_h is worse than π_i, replace π_h with π_i.
The update of HM is detailed in Algorithm 5. Its time complexity is O(t·sh), which is O(sh²) in the worst case since t ≤ sh.
Algorithm 5. Update algorithm.
01. For (i = 1 to t){
02.   Set pos ← −1; ms ← −1;
03.   For (h = 1 to sh){
04.     If (ms < C(π_h)){
05.       Set pos ← h, ms ← C(π_h);
06.     }
07.   }
08.   If (pos ≠ −1 and ms > C(π_i)){
09.     Replace π_pos with π_i;
10.   }
11. }
12. Return;
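The replace-the-worst logic of Algorithm 5 reduces to a few lines of Python. In this sketch the function name and the callable makespan argument are illustrative choices of mine, not the paper's.

```python
def update_hm(hm, candidates, makespan):
    """Algorithm 5 in miniature: each candidate replaces the current
    worst harmony in HM if it is strictly better (smaller makespan)."""
    for cand in candidates:
        # Locate the worst harmony currently in memory.
        pos = max(range(len(hm)), key=lambda h: makespan(hm[h]))
        if makespan(cand) < makespan(hm[pos]):
            hm[pos] = cand
    return hm
```

Because the worst member is recomputed after every replacement, a batch of t candidates can displace up to t distinct poor solutions, which is what keeps multiple good solutions (rather than one) in HM.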

3.2.4. The Proposed Algorithm

The algorithm proposed in this paper, called IOHS, is a hybrid meta-heuristic based on HS and iterative optimization. After the D-NEH algorithm produces the initial solution set, the NSGA is used to construct new solutions. While constructing a new solution, the algorithm may select a job from HM or from the job set based on the parameter rh, and adjusts it based on the parameter rp to avoid premature convergence. The IOA is then used to further improve the quality of the new solution and obtain the final candidate solution. The worst solutions in HM are replaced if they are worse than the candidate solutions. The construction, optimization, and update procedures are repeated until the stopping condition is satisfied. Finally, the solutions that satisfy the multimodal optimization criterion are output.
The IOHS algorithm is described in detail in Algorithm 6.
Algorithm 6. IOHS algorithm.
01. Generate the initial solution set of HM by using D-NEH algorithm;
02. Repeat
03.   Construct a new solution set π_new = {π_1, π_2, …, π_t} by using the NSGA algorithm;
04.   Use the Update algorithm to update HM;
05. Until (the stopping condition is satisfied)
06. Calculate the objective function values based on Equation (7);
07. Output the solutions satisfying multimodal optimization based on Equation (8);
08. Return;

4. Simulations

In this section, the test dataset used to assess the performance of the proposed method is described. Then, Statgraphics is used to analyze the parameters rh and rp, and, finally, the simulations are detailed.

4.1. Test Dataset

The Taillard benchmark [23] has been widely used in scenarios involving single and multiple manufacturing factories. However, as the number of factories in a distributed manufacturing system increases, the average number of jobs assigned to each factory may become very small. For example, in a DMS consisting of 10 factories and 200 jobs, the average allocation is only 20 jobs per factory. Although the solution space is huge, the problem size for each factory is relatively small. Therefore, to faithfully simulate job allocation in distributed scenarios, a new dataset is generated based on the main idea of Taillard's data generation algorithm.
The generated dataset is as follows: the processing time of each job on the M machines is randomly generated in the interval [5, 99]; the number of jobs J is set to 60, 150, 330, 510, or 600; the number of factories F is set to 2, 3, 5, or 10; and the number of machines M in each factory is set to 5, 10, or 20. The stopping condition of the algorithm is described in Section 4.2.

4.2. Parameter Analysis

The parameter sh represents the size of HM, i.e., the number of solutions in HM. Based on the total number of jobs, sh is set to 0.2 × J. The parameter bw is used to widen the search space, prevent the algorithm from converging prematurely, and improve the probability of finding a better solution; here, bw is set to 0.05 × J. To evaluate the algorithms fairly with respect to the number of machines and the average number of jobs per factory, the stopping condition is a time budget T = M × (J/F) × t × 0.5 ms [24], where M is the number of machines in each factory, J is the total number of jobs to be processed, F is the number of factories, and t is set to 120.
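The time budget can be computed directly from this formula; a small helper (illustrative name, not the authors' code) might read:

```python
def time_budget_ms(n_machines, n_jobs, n_factories, t=120):
    """Stopping condition T = M * (J / F) * t * 0.5, in milliseconds."""
    return n_machines * (n_jobs / n_factories) * t * 0.5
```

For example, with M = 10, J = 330, and F = 5 (the setting used in the parameter analysis), `time_budget_ms(10, 330, 5)` evaluates to 39,600 ms, i.e., about 39.6 s per run.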
The rates of consideration rh and pitch adjustment rp are two important parameters in HS algorithm. rh determines the probability of randomly generating a solution based on the historical variables in HM. As HM stores the best solutions obtained by the algorithm at any given time, the algorithm tends to find more optimized solutions (leading to faster convergence) when the value of rh increases. rp determines the probability of adjusting the values selected from among the historical values in HM, which means that it determines the probability of adjusting some vectors of a solution (probability of divergence). As the value of rp increases, the algorithm tends to generate solutions by selecting from global variables, that is, by expanding the search range (divergence) to find more optimized solutions. To determine appropriate values of rh and rp, they are tested with different values: rh is set to 0.70, 0.75, 0.80, 0.85, 0.90, and 0.95, while rp is set to 0.10, 0.15, 0.20, 0.25, 0.30, and 0.35. In order to test the parameter values fairly and quickly, M is set to 10, J to 330, and F to 5. For each parametric combination, a total of 20 instances are generated, each of which is executed three times, and the minimum value of the objective function is obtained. The average makespan values (MSavg) and standard deviation (SD) of each parameter combination are shown in Table 2.
As seen in Table 2, when rh = 0.95 and rp = 0.10, the average makespan is 4601, the best among these values, with a standard deviation of 31.54, which indicates that the algorithm searches a wider and better solution space under these two parameter values. Furthermore, different parameter combinations, as listed in Table 2, can have a significant impact on the average makespan values; the analysis results are shown in Figure 2, Figure 3 and Figure 4.
Figure 2 shows the results of a multivariate analysis of variance. As the value of rh increases and that of rp decreases, the solution obtained by the algorithm improves. This indicates that the larger the value of rh and the smaller the value of rp, the better the performance of the algorithm. It is clear from Figure 2 that when rh = 0.95 and rp = 0.10, the algorithm obtains the best solution.
Figure 3 shows the results of a one-way analysis of variance for rh. Clearly, as the value of rh increases, the solution obtained by the algorithm is improved. When rh is 0.95, the algorithm can achieve the best solution.
Figure 4 shows the results of a one-way analysis of variance for rp, from which it can be seen that as the value of rp decreases, the solution obtained by the algorithm is improved. When rp is 0.10, the algorithm can obtain the best solution.
From the above analysis, it can be concluded that setting rh = 0.95 and rp = 0.10 can enable IOHS algorithm to obtain the optimal makespan.

4.3. Experimental Verification

4.3.1. Comparison of HS and IOHS

To determine whether the optimization method (IOA) can enhance the performance of IOHS, this study compares the HS and IOHS algorithms from three perspectives: job, factory, and machine. The average makespan values and standard deviation obtained by HS and IOHS algorithms are shown in Table 3.
From Table 3, it can be seen that the IOHS algorithm obtains better makespan values than the HS algorithm, and the standard deviation of the IOHS algorithm is generally larger than that of the HS algorithm, which suggests that IOHS finds better solutions by exploring a wider solution space. The analysis results are shown in Figure 5, Figure 6 and Figure 7.
It can be seen from Figure 5 that as the number of jobs increases, the problem becomes more complex, and the makespan values obtained by the algorithms increase. However, under the same number of jobs, the IOHS algorithm outperforms the HS algorithm.
Figure 6 shows that, as the number of factories increases, the number of jobs in each factory becomes smaller, and the resulting makespan values obtained by the algorithms decrease. Under the same number of factories, the IOHS algorithm outperforms the HS algorithm.
It can be seen from Figure 7 that, as the number of machines increases, the makespan values obtained by the algorithms increase. Under the same number of machines, the IOHS algorithm outperforms the HS algorithm.
Based on the above analysis, it can be concluded that the IOHS algorithm achieves better results than the HS algorithm under both individual parameters and parameter combinations. In other words, the optimization method IOA is effective in improving the performance of the IOHS algorithm.

4.3.2. Algorithms Comparison

Many effective meta-heuristic algorithms have been developed in research, including PSO [25], DE [26], and CMA-ES [27], that could be compared with the proposed IOHS algorithm in the context of the DPFSP. However, only the iterative greedy meta-heuristic algorithm with a bounded search strategy (BSIG) [28], the improved Jaya algorithm (Jaya) [29], and the novel evolutionary algorithm (NEA) [30] are suitable for solving the problem considered here. They are implemented along with the proposed algorithm on a new dataset, with the aim of multimodal optimization. All four algorithms use the same stopping condition, which is described in Section 4.2. The average makespan values and standard deviations obtained by these algorithms are shown in Table 4.
Table 4 shows that, for most instances, the IOHS algorithm can obtain the best results. Although the value of its standard deviation is close to that of other algorithms, it can search a better solution space. The average relative percentage deviation (ARPD) is used for further analysis of these algorithms; it is defined in Equation (10).
ARPD = (1/N) Σ_{i=1}^{N} (F_i(H)/B_i − 1) × 100%
where N is the number of instances of the same size, F_i(H) is the makespan value obtained by algorithm H on instance i, and B_i is the minimum makespan obtained for instance i among all the algorithms. The larger the value of the ARPD, the worse the average performance of the algorithm. Table 5 lists the ARPD values obtained by the BSIG, Jaya, NEA, and IOHS algorithms.
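The ARPD of Equation (10) is straightforward to compute; the following sketch (illustrative, not the authors' code) takes one algorithm's makespans and the per-instance best values:

```python
def arpd(makespans, best):
    """Average relative percentage deviation (Equation (10)):
    the mean of (F_i(H) / B_i - 1) over N instances, times 100."""
    n = len(makespans)
    return sum(f / b - 1 for f, b in zip(makespans, best)) / n * 100
```

For example, `arpd([110, 105], [100, 100])` gives 7.5, meaning the algorithm is on average 7.5% above the best known makespans.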
Table 5 shows that the BSIG algorithm has the worst performance, and that the IOHS algorithm is slightly better than the NEA algorithm. Because the data in Table 5 are derived from Table 4 using Equation (10) and do not follow a normal distribution, a Wilcoxon test is conducted on them. The results of the two-related-samples tests for BSIG versus IOHS, Jaya versus IOHS, and NEA versus IOHS are summarized in Table 6.
From the hypothesis test results in Table 6, it can be seen that the IOHS algorithm is significantly better than the three compared algorithms. This is consistent with the statistical results in Table 4 and Table 5.
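For reference, the Wilcoxon signed-rank statistic underlying Table 6 can be computed as in the following sketch (a minimal version with zeros dropped and average ranks for ties; a full test with p-values is available in SciPy as `scipy.stats.wilcoxon`):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic for paired samples:
    rank |x_i - y_i| (zero differences dropped, tied magnitudes get
    average ranks) and return the smaller of the positive- and
    negative-rank sums."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while (j + 1 < len(order)
               and abs(diffs[order[j + 1]]) == abs(diffs[order[i]])):
            j += 1
        avg = (i + j) / 2 + 1          # average rank for tied magnitudes
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_pos, w_neg)
```

A very small W (relative to the number of pairs) indicates that one algorithm's ARPD values are systematically lower, which is what the near-zero significances in Table 6 reflect.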
To facilitate comparison and visually display the trends of the solutions to the DPFSP obtained by different algorithms at different scales, the results of the multifactor ANOVA are shown in Figure 8, Figure 9 and Figure 10.
Figure 8 shows that, as the number of jobs increases, the ARPD values obtained by BSIG slightly decrease, while those obtained by the NEA algorithm increase. This indicates that, as the number of jobs grows, the BSIG algorithm can expand its search range and find better solutions, whereas the NEA algorithm deteriorates. The ARPD values obtained by the Jaya and IOHS algorithms show no significant change as the number of jobs increases. Clearly, regardless of the number of jobs, the IOHS algorithm performs best among the four algorithms.
Figure 9 shows that, as the number of factories increases, the ARPD values obtained by the IOHS and Jaya algorithms both show a downward trend, while those obtained by the BSIG and NEA algorithms fluctuate. That is, the IOHS and Jaya algorithms are still able to find better solutions when the number of factories increases. The IOHS algorithm again shows the best performance among the four algorithms.
Figure 10 shows that, as the number of machines increases, the ARPD values obtained by the BSIG, Jaya, and IOHS algorithms exhibit a decreasing trend, while those obtained by the NEA algorithm fluctuate. This means that the BSIG, Jaya, and IOHS algorithms can find better solutions as machines are added, while the behavior of the NEA algorithm is uncertain. Furthermore, the IOHS algorithm performs best among the four algorithms.
From the above analysis, the following conclusion can be drawn: although the NEA algorithm is superior to the BSIG algorithm, neither performs as well as the Jaya and IOHS algorithms. Whether analyzed from the perspective of jobs, factories, or machines, the ARPD values obtained by the IOHS algorithm are superior to those of the three compared algorithms, indicating that IOHS provides the best and most stable performance in solving DPFSPs at all scales.

4.4. Discussion

Compared with the three meta-heuristics, the proposed IOHS algorithm is an efficient and effective method. Its advantages are manifested in the following aspects:
  • By using the distributed NEH algorithm D-NEH to generate the initial solution set, it can improve the quality of the initial solutions, thereby accelerating the algorithm’s convergence speed.
  • The proposed iterative optimization algorithm IOA can further optimize the candidate solutions through selection–insertion and pairwise exchange operations, which significantly enhance the IOHS algorithm’s search capability and solution quality.
  • It dynamically adjusts its search strategy through parameters (e.g., rh and rp), balancing the guidance of historical optimal solutions (convergence) with random adjustments (divergence) to avoid local optima. This ensures a more comprehensive exploration of the solution space.
  • The constructed algorithm can find high-quality solutions within a reasonable time through optimization operations and dynamic updates to harmony memory, which make it applicable to large-scale scheduling problems.
  • The IOHS algorithm demonstrated more stable and superior performance compared to the BSIG, Jaya, and NEA algorithms on 600 newly generated datasets. Its lower average relative percentage deviation indicates higher accuracy and robustness in solving DPFSP problems.
As the IOHS algorithm performs well in different scales of DPFSP problems (such as different numbers of factories, machines, and jobs), the conclusion can be made that the IOHS algorithm is highly applicable and effective in solving complex distributed permutation flowshop scheduling problems.
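The selection–insertion and pairwise-exchange moves mentioned above can be sketched as best-improvement neighborhood searches over a single factory's job sequence. This is an illustrative reconstruction, not the authors' IOA code; `objective` stands for any makespan evaluator.

```python
def best_swap(seq, objective):
    """Pairwise exchange: try every swap of two jobs and keep the
    candidate that most reduces the objective (e.g., makespan)."""
    best, best_val = seq, objective(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            cand = seq[:]
            cand[i], cand[j] = cand[j], cand[i]
            v = objective(cand)
            if v < best_val:
                best, best_val = cand, v
    return best

def best_insertion(seq, objective):
    """Selection-insertion: remove each job and reinsert it at every
    position, keeping the best resulting sequence."""
    best, best_val = seq, objective(seq)
    for i in range(len(seq)):
        rest = seq[:i] + seq[i + 1:]
        for k in range(len(seq)):
            cand = rest[:k] + [seq[i]] + rest[k:]
            v = objective(cand)
            if v < best_val:
                best, best_val = cand, v
    return best
```

Applying the two moves alternately until neither improves the objective yields a simple local search of the kind the IOA step performs on candidate solutions.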
In the meantime, its disadvantages are also obvious:
  • As it is a meta-heuristic, its computational overhead is significant. Compared with heuristic algorithms, it is not suitable for scenarios requiring real-time scheduling.
  • The algorithm’s performance heavily depends on key parameters (e.g., rh, rp, and bw), which require experimental tuning. Improper parameter settings may lead to premature convergence (trapping in local optima) or inefficient exploration.
  • Compared with more intelligent optimization algorithms such as Q-learning, this meta-heuristic still has high randomness in the solution space search process.
Regarding its disadvantages, future improvements could involve dynamic parameter adjustment, hybridizing with other meta-heuristic search strategies, parallel computing, or combining with artificial intelligence technologies.

5. Conclusions

This study discussed the distributed permutation flowshop scheduling in detail, and proposed a hybrid harmony search algorithm IOHS for multimodal optimization. The proposed algorithm initializes HM with a distributed NEH algorithm, continuously constructs and adjusts new solutions using the NSGA algorithm to obtain better solutions, and updates HM until the stopping condition is met. The constructed algorithm was analyzed by different parameter combinations, and was compared with three classic or recently developed algorithms on a large number of test datasets. The results show that IOHS is applicable and effective in solving complex distributed permutation flowshop scheduling problems.
The development and empirical evaluation of the algorithm reported in this paper also reveal some fruitful directions for future research in the scheduling field. First, researchers should seek to enhance the capability of the proposed algorithm to search large-scale problem spaces. Second, as new bio-inspired algorithms in the tradition of PSO, DE, and CMA-ES are developed, devising effective and efficient algorithms for the DPFSP should remain a major aim of research in the area. Third, the performance of the algorithms considered here should be compared under criteria other than those used here, particularly criteria arising in different production environments with varying objectives, on various DMS-related scheduling problems. The DPFSP involving more realistic issues, such as no-wait constraints, parallel batching, set-up times, and transmission times, is a more complex problem that is commonly encountered in production environments; assessing the performance of the relevant methods on such scheduling problems is therefore important future work. Finally, researchers should consider problems in different production environments, including the flowshop, open-shop, and job-shop. This is a promising opportunity to explore the development and application of scheduling theory in DMS and services.

Author Contributions

Validation, Y.C.; writing—original draft, Y.L.; writing—review and editing, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key R&D Program of China (grant no. 2024YFC3307901).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the Jiangsu Key Laboratory of Audit Information Engineering for their support, as well as everyone who supported the publication of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zheng, J.; Wang, L.; Wang, J.J. A cooperative coevolution algorithm for multi-objective fuzzy distributed hybrid flowshop. Knowl.-Based Syst. 2020, 194, 105536.
  2. Li, X.P.; Wu, C. Heuristic for no-wait flow shops with makespan minimization based on total idle-time increments. Sci. China Ser. F Inf. Sci. 2008, 51, 896–909.
  3. Gogos, C. Solving the distributed permutation flow-shop scheduling problem using constrained programming. Appl. Sci. 2023, 13, 12562.
  4. Hao, J.H.; Li, J.Q.; Du, Y.; Song, M.X.; Duan, P.; Zhang, Y.Y. Solving distributed hybrid flowshop scheduling problems by a hybrid brainstorm optimization algorithm. IEEE Access 2019, 7, 66879–66894.
  5. Geng, Y.; Li, J. An improved hyperplane assisted multiobjective optimization for distributed hybrid flowshop scheduling problem in glass manufacturing systems. Comput. Model. Eng. Sci. 2023, 134, 241–266.
  6. Zhang, W.; Geng, H.; Li, C.; Gen, M.; Zhang, G.; Deng, M. Q-learning-based multi-objective particle swarm optimization with local search within factories for energy-efficient distributed flow-shop scheduling problem. J. Intell. Manuf. 2025, 36, 185–208.
  7. Bai, D.; Liu, T.; Zhang, Y.; Chu, F.; Qin, H.; Gao, L.; Su, Y.; Huang, M. Scheduling a distributed permutation flowshop with uniform machines and release dates. IEEE Trans. Autom. Sci. Eng. 2024, 22, 215–227.
  8. Zhao, F.; Zhuang, C.; Wang, L.; Dong, C. An iterative greedy algorithm with Q-learning mechanism for the multiobjective distributed no-idle permutation flowshop scheduling. IEEE Trans. Autom. Sci. Eng. 2024, 54, 3207–3219.
  9. Li, Q.Y.; Pan, Q.K.; Sang, H.Y.; Jing, X.L.; Framiñán, J.M.; Li, W.M. Self-adaptive population-based iterated greedy algorithm for distributed permutation flowshop scheduling problem with part of jobs subject to a common deadline constraint. Expert Syst. Appl. 2024, 248, 123278.
  10. Song, H.B.; Lin, J.; Chen, Y.R. An effective two-stage heuristic for scheduling the distributed assembly flowshops with sequence dependent setup times. Comput. Oper. Res. 2024, 173, 106850.
  11. Perez-Gonzalez, P.; Framiñán, J.M. A review and classification on distributed permutation flowshop scheduling problems. Eur. J. Oper. Res. 2024, 312, 1–21.
  12. Yu, Y.; Zhong, Q.; Sun, L.; Han, Y.; Zhang, Q.; Jing, X.; Wang, Z. A self-adaptive two stage iterative greedy algorithm based job scales for energy-efficient distributed permutation flowshop scheduling problem. Swarm Evol. Comput. 2025, 92, 101777.
  13. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
  14. Zhang, T.H.; Geem, Z.W. Review of harmony search with respect to algorithm structure. Swarm Evol. Comput. 2019, 48, 31–43.
  15. Oh, E.; Geem, Z.W. Exploring harmony search for power system optimization: Applications, formulations, and open problems. Appl. Energy 2025, 398, 126452.
  16. Khaleel, S.; Zaher, H.; Raeid, S.N. A novel hybrid Harmony Search (HS) with War Strategy Optimization (WSO) for solving optimization problems. Mech. Contin. Math. Sci. 2024, 19, 27–46.
  17. Wang, B.; Zhang, P.; Wang, X.; Pan, Q. Three-way decision based island harmony search algorithm for robust flow-shop scheduling with uncertain processing times depicted by big data. Appl. Soft Comput. 2024, 162, 111842.
  18. Li, J.; Xue, S.; Li, M.; Shi, X. Research on the performance of harmony search with local search algorithms for solving flexible job-shop scheduling problem. J. Intell. Fuzzy Syst. 2025, 48, 291–304.
  19. Qu, B.Y.; Suganthan, P.N.; Das, S. A distance-based locally informed particle swarm model for multi-modal optimization. IEEE Trans. Evol. Comput. 2013, 17, 387–402.
  20. Nawaz, M.; Enscore, E.E., Jr.; Ham, I. A heuristic algorithm for the m-machine, n-job flow shop sequencing problem. OMEGA Int. J. Manag. Sci. 1983, 11, 91–95.
  21. Ta, Q.C.; Pham, P.M. Integrated flowshop and vehicle routing problem based on tabu search algorithm. Int. J. Comput. (IJC) 2022, 43, 24–35.
  22. Zhou, Y.; Xu, W.; Fu, Z.H.; Zhou, M.C. Multi-neighborhood simulated annealing-based iterated local search for colored traveling salesman problems. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16072–16082.
  23. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
  24. Li, Y.Z.; Li, X.P.; Gupta, J.N.D. Solving the multi-objective flowline manufacturing cell scheduling problem by hybrid harmony search. Expert Syst. Appl. 2015, 42, 1409–1417.
  25. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  26. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  27. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18.
  28. Fernandez-Viagas, V.; Framinan, J.M. A bounded-search iterated greedy algorithm for the distributed permutation flowshop scheduling problem. Int. J. Prod. Res. 2015, 53, 1111–1123.
  29. Pan, Y.; Gao, K.; Li, Z.; Wu, N. Solving biobjective distributed flow-shop scheduling problems with lot-streaming using an improved Jaya algorithm. IEEE Trans. Cybern. 2023, 53, 3818–3828.
  30. Pan, Y.X.; Gao, K.Z.; Li, Z.W.; Wu, N.Q. A novel evolutionary algorithm for scheduling distributed no-wait flow shop problems. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 3694–3704.
Figure 1. A simple example of a Gantt chart for DPFSP.
Figure 2. Multivariate analysis (rh and rp).
Figure 3. Analysis of parameter rh.
Figure 4. Analysis of parameter rp.
Figure 5. Multivariate analysis of variance for job and algorithm.
Figure 6. Multivariate analysis of variance for factory and algorithm.
Figure 7. Multivariate analysis of variance for machine and algorithm.
Figure 8. The multifactor ANOVA (job and algorithm).
Figure 9. The multifactor ANOVA (factory and algorithm).
Figure 10. The multifactor ANOVA (machine and algorithm).
Table 1. An example of HM. Each factory column gives the job sequence processed in that factory.

Solution Structure   Factory 1    Factory 2   Factory 3
{3,3,3}              {1,3,4}      {2,5,9}     {6,7,8}
{4,2,3}              {1,3,4,9}    {2,5}       {6,7,8}
Table 2. Comparison of parameter combinations.

(rh, rp)       MSavg  SD       (rh, rp)       MSavg  SD       (rh, rp)       MSavg  SD
(0.70, 0.10)   4663   27.77    (0.80, 0.10)   4648   27.55    (0.90, 0.10)   4614   30.80
(0.70, 0.15)   4659   28.63    (0.80, 0.15)   4658   33.93    (0.90, 0.15)   4626   35.52
(0.70, 0.20)   4675   38.42    (0.80, 0.20)   4665   30.51    (0.90, 0.20)   4641   26.09
(0.70, 0.25)   4679   35.49    (0.80, 0.25)   4665   28.00    (0.90, 0.25)   4652   29.08
(0.70, 0.30)   4689   31.21    (0.80, 0.30)   4679   32.71    (0.90, 0.30)   4654   25.69
(0.70, 0.35)   4687   30.32    (0.80, 0.35)   4680   31.18    (0.90, 0.35)   4663   28.29
(0.75, 0.10)   4651   31.57    (0.85, 0.10)   4627   26.51    (0.95, 0.10)   4601   31.54
(0.75, 0.15)   4659   41.57    (0.85, 0.15)   4644   28.50    (0.95, 0.15)   4614   38.53
(0.75, 0.20)   4666   38.24    (0.85, 0.20)   4652   30.50    (0.95, 0.20)   4627   31.48
(0.75, 0.25)   4672   33.12    (0.85, 0.25)   4661   31.96    (0.95, 0.25)   4636   38.94
(0.75, 0.30)   4678   35.82    (0.85, 0.30)   4667   31.27    (0.95, 0.30)   4648   26.03
(0.75, 0.35)   4689   31.06    (0.85, 0.35)   4673   33.33    (0.95, 0.35)   4651   37.61
Table 3. Comparison of HS and IOHS. Each column group lists the parameters (J, F, M), followed by MSavg and SD for HS, then MSavg and SD for IOHS.

(60, 2, 5)     1847  59.67   1774  85.98    (150, 5, 20)   3285  22.13   2858  22.52    (510, 3, 10)   10,483  68.54   9442    72.12
(60, 2, 10)    2318  43.87   2119  46.85    (150, 10, 5)   1203  26.24   965   26.41    (510, 3, 20)   11,782  52.25   10,365  67.35
(60, 2, 20)    3102  39.85   2871  44.02    (150, 10, 10)  1618  15.82   1333  12.97    (510, 5, 5)    6049    39.70   5528    60.17
(60, 3, 5)     1349  41.52   1237  53.40    (150, 10, 20)  2333  13.43   2009  15.18    (510, 5, 10)   6748    49.31   5837    44.61
(60, 3, 10)    1751  36.66   1587  41.69    (330, 2, 5)    9159  110.49  8866  152.78   (510, 5, 20)   7913    50.26   6789    46.79
(60, 3, 20)    2485  29.93   2284  31.09    (330, 2, 10)   10,007 101.23 9230  115.82   (510, 10, 5)   3351    36.66   2834    49.87
(60, 5, 5)     937   26.17   817   21.86    (330, 2, 20)   11,374 73.20  10,195 55.13   (510, 10, 10)  3893    32.43   3200    31.85
(60, 5, 10)    1308  16.21   1166  16.31    (330, 3, 5)    6314  70.25   5932  89.85    (510, 10, 20)  4846    17.14   4053    18.84
(60, 5, 20)    1977  18.54   1810  20.67    (330, 3, 10)   7078  29.42   6322  69.01    (600, 2, 5)    16,347  149.29  15,977  214.21
(60, 10, 5)    626   18.04   525   16.41    (330, 3, 20)   8237  39.13   7223  40.27    (600, 2, 10)   17,485  95.85   16,365  173.86
(60, 10, 10)   962   18.71   838   15.31    (330, 5, 5)    4020  41.91   3615  64.45    (600, 2, 20)   19,138  63.66   17,279  82.79
(60, 10, 20)   1593  28.57   1450  26.87    (330, 5, 10)   4651  27.79   3974  39.79    (600, 3, 5)    11,205  126.23  10,692  164.90
(150, 2, 5)    4373  56.27   4214  96.23    (330, 5, 20)   5667  31.89   4857  31.03    (600, 3, 10)   12,137  76.40   11,008  115.43
(150, 2, 10)   4907  50.04   4477  56.65    (330, 10, 5)   2280  21.91   1881  28.93    (600, 3, 20)   13,560  54.99   11,979  60.05
(150, 2, 20)   5923  42.96   5293  51.22    (330, 10, 10)  2787  24.98   2264  21.67    (600, 5, 5)    7060    68.62   6478    103.95
(150, 3, 5)    3037  42.75   2819  56.91    (330, 10, 20)  3634  11.06   3039  15.93    (600, 5, 10)   7806    48.11   6819    74.35
(150, 3, 10)   3571  38.06   3137  45.88    (510, 2, 5)    14,136 150.29 13,810 221.83  (600, 5, 20)   9002    25.86   7745    32.39
(150, 3, 20)   4502  44.06   3972  36.72    (510, 2, 10)   14,994 114.14 13,997 218.39  (600, 10, 5)   3863    29.49   3295    41.03
(150, 5, 5)    1998  28.29   1733  31.89    (510, 2, 20)   16,599 63.02  14,947 71.85   (600, 10, 10)  4464    37.89   3687    27.79
(150, 5, 10)   2465  35.25   2102  31.97    (510, 3, 5)    9528  66.37   9057  86.60    (600, 10, 20)  5437    24.15   4529    30.10
Table 4. Comparison of algorithms. Each column group lists the parameters (J, F, M), followed by MSavg and SD for BSIG, Jaya, NEA, and IOHS, in that order.
(60, 2, 5)1828 76.06 1782 81.81 1773 86.77 1774 85.98 (330, 5, 5)3713 54.05 3641 56.15 3643 57.86 3615 64.45
(60, 2, 10)2249 45.19 2144 50.97 2100 44.35 2119 46.85 (330, 5, 10)4181 41.45 4028 32.49 4024 29.86 3974 39.79
(60, 2, 20)3024 46.37 2897 41.33 2842 40.58 2871 44.02 (330, 5, 20)5081 29.97 4894 24.74 4884 25.15 4857 31.03
(60, 3, 5)1248 51.13 1254 49.12 1236 54.23 1237 53.40 (330, 10, 5)1973 31.82 1920 28.17 1910 29.77 1881 28.93
(60, 3, 10)1613 37.14 1618 43.27 1573 41.74 1587 41.69 (330, 10, 10)2386 20.95 2308 19.54 2294 21.57 2264 21.67
(60, 3, 20)2318 38.24 2321 31.82 2261 29.95 2284 31.09 (330, 10, 20)3180 24.70 3072 20.41 3059 19.05 3039 15.93
(60, 5, 5)828 23.70 841 23.85 810 24.20 817 21.86 (510, 2, 5)13,899 244.89 13,811 220.65 13,816 217.14 13,810 221.83
(60, 5, 10)1182 15.36 1193 15.92 1153 15.48 1166 16.31 (510, 2, 10)14,225 169.83 14,006 213.08 14,059 192.12 13,997 218.39
(60, 5, 20)1833 21.33 1845 20.33 1795 23.04 1810 20.67 (510, 2, 20)15,393 78.56 15,016 81.52 15,028 68.95 14,947 71.85
(60, 10, 5)534 17.93 544 18.71 522 17.93 525 16.41 (510, 3, 5)9160 86.36 9073 78.28 9087 78.83 9057 86.60
(60, 10, 10)855 15.43 867 17.02 832 16.06 838 15.31 (510, 3, 10)9719 92.83 9503 71.43 9522 68.70 9442 72.12
(60, 10, 20)1472 23.32 1484 25.26 1438 24.18 1450 26.87 (510, 3, 20)10,761 72.29 10,445 53.55 10,430 68.87 10,365 67.35
(150, 2, 5)4270 87.90 4215 95.43 4214 95.93 4214 96.23 (510, 5, 5)5640 42.93 5556 55.63 5563 54.40 5528 60.17
(150, 2, 10)4655 45.47 4482 58.67 4469 60.62 4477 56.65 (510, 5, 10)6085 38.79 5912 43.12 5926 46.02 5837 44.61
(150, 2, 20)5544 50.02 5295 56.45 5271 50.94 5293 51.22 (510, 5, 20)7084 51.09 6853 48.51 6839 49.07 6789 46.79
(150, 3, 5)2897 64.82 2828 54.20 2819 57.42 2819 56.91 (510, 10, 5)2940 41.05 2881 43.86 2883 43.27 2834 49.87
(150, 3, 10)3325 42.00 3176 40.52 3131 42.94 3137 45.88 (510, 10, 10)3354 27.62 3258 34.22 3253 35.61 3200 31.85
(150, 3, 20)4164 48.92 4008 38.22 3954 35.18 3972 36.72 (510, 10, 20)4234 22.19 4094 19.83 4087 14.27 4053 18.84
(150, 5, 5)1823 27.40 1765 30.48 1738 29.36 1733 31.89 (600, 2, 5)16,042 198.69 15,980 213.34 15,987 213.33 15,977 214.21
(150, 5, 10)2225 38.66 2141 33.31 2099 32.04 2102 31.97 (600, 2, 10)16,647 160.67 16,399 151.60 16,422 152.45 16,365 173.86
(150, 5, 20)3004 23.39 2887 26.52 2843 26.76 2858 22.52 (600, 2, 20)17,773 60.58 17,379 56.36 17,397 56.30 17,279 82.79
(150, 10, 5)972 27.17 992 25.35 967 26.68 965 26.41 (600, 3, 5)10,804 133.22 10,707 153.92 10,721 146.39 10,692 164.90
(150, 10, 10)1342 15.53 1365 14.41 1332 11.36 1333 12.97 (600, 3, 10)11,299 106.39 11,065 97.58 11,106 87.11 11,008 115.43
(150, 10, 20)2024 16.33 2044 14.85 2003 16.29 2009 15.18 (600, 3, 20)12,389 71.59 12,062 64.43 12,052 59.50 11,979 60.05
(330, 2, 5)8941 131.44 8867 151.42 8875 147.90 8866 152.78 (600, 5, 5)6601 61.92 6512 83.50 6520 85.59 6478 103.95
(330, 2, 10)9451 117.41 9244 112.69 9255 106.97 9230 115.82 (600, 5, 10)7065 37.97 6894 67.23 6904 58.24 6819 74.35
(330, 2, 20)10,575 62.94 10,217 69.41 10,240 60.97 10,195 55.13 (600, 5, 20)8065 43.40 7808 34.97 7798 36.49 7745 32.39
(330, 3, 5)6036 99.49 5939 87.41 5947 88.49 5932 89.85 (600, 10, 5)3402 36.13 3343 32.79 3348 38.13 3295 41.03
(330, 3, 10)6567 58.99 6363 54.33 6378 52.69 6322 69.01 (600, 10, 10)3871 31.92 3763 28.02 3753 31.21 3687 27.79
(330, 3, 20)7530 48.89 7277 39.06 7254 39.22 7223 40.27 (600, 10, 20)4742 32.70 4580 26.85 4569 28.40 4529 30.10
Table 5. Comparison of algorithms based on ARPD. Each column group lists the parameters (J, F, M), followed by the ARPD (%) of BSIG, Jaya, NEA, and IOHS, in that order.
(60, 2, 5)4.61 1.95 0.68 0.70 (150, 5, 20)0.90 0.19 0.33 0.06 (510, 3, 10)1.15 0.53 1.17 0.16
(60, 2, 10)8.63 3.05 0.30 1.25 (150, 10, 5)3.12 0.23 0.78 0.06 (510, 3, 20)0.79 0.49 0.77 0.18
(60, 2, 20)0.87 0.24 0.24 0.10 (150, 10, 10)2.15 0.21 0.49 0.08 (510, 5, 5)1.77 0.88 0.91 0.29
(60, 3, 5)2.14 0.13 0.22 0.04 (150, 10, 20)1.52 0.19 0.37 0.07 (510, 5, 10)1.43 0.66 1.90 0.18
(60, 3, 10)2.32 0.24 0.34 0.10 (330, 2, 5)4.14 1.60 0.23 0.35 (510, 5, 20)0.70 0.44 0.90 0.16
(60, 3, 20)1.61 0.17 0.26 0.06 (330, 2, 10)2.31 0.63 0.60 0.20 (510, 10, 5)0.82 0.54 2.14 0.20
(60, 5, 5)3.67 0.22 0.46 0.07 (330, 2, 20)1.09 0.51 0.75 0.20 (510, 10, 10)0.86 0.56 2.04 0.26
(60, 5, 10)2.78 0.26 0.40 0.09 (330, 3, 5)2.62 0.84 0.46 0.27 (510, 10, 20)0.56 0.35 1.08 0.15
(60, 5, 20)2.01 0.18 0.25 0.05 (330, 3, 10)2.04 0.62 1.21 0.20 (600, 2, 5)1.23 0.59 0.16 0.21
(60, 10, 5)4.72 0.25 0.52 0.09 (330, 3, 20)1.10 0.52 0.77 0.19 (600, 2, 10)1.04 0.55 0.50 0.18
(60, 10, 10)3.87 0.21 0.45 0.05 (330, 5, 5)3.18 0.87 0.99 0.30 (600, 2, 20)0.66 0.41 0.76 0.17
(60, 10, 20)2.66 0.17 0.45 0.07 (330, 5, 10)2.28 0.76 1.62 0.28 (600, 3, 5)1.31 0.83 0.51 0.30
(150, 2, 5)1.64 0.28 0.03 0.07 (330, 5, 20)1.24 0.57 0.93 0.21 (600, 3, 10)1.17 0.64 1.13 0.24
(150, 2, 10)1.33 0.87 0.23 0.28 (330, 10, 5)1.31 0.59 1.98 0.20 (600, 3, 20)0.69 0.49 0.73 0.20
(150, 2, 20)1.06 0.79 0.18 0.26 (330, 10, 10)1.00 0.50 1.84 0.18 (600, 5, 5)1.58 0.78 0.97 0.30
(150, 3, 5)0.62 0.39 0.09 0.15 (330, 10, 20)0.99 0.37 0.93 0.12 (600, 5, 10)1.29 0.71 1.57 0.24
(150, 3, 10)0.66 0.30 0.29 0.07 (510, 2, 5)1.56 0.68 0.14 0.23 (600, 5, 20)0.79 0.49 0.86 0.17
(150, 3, 20)0.55 0.19 0.22 0.07 (510, 2, 10)1.29 0.66 0.71 0.27 (600, 10, 5)1.45 0.77 2.00 0.25
(150, 5, 5)1.94 0.40 0.41 0.11 (510, 2, 20)0.78 0.50 0.83 0.17 (600, 10, 10)1.27 0.64 2.07 0.23
(150, 5, 10)1.44 0.22 0.32 0.07 (510, 3, 5)1.39 0.55 0.55 0.13 (600, 10, 20)0.67 0.41 1.14 0.15
Table 6. Hypothesis test summary.

| # | Null Hypothesis | Test | Sig. | Decision |
|---|---|---|---|---|
| 1 | The median of differences between BSIG and IOHS equals 0 | Related-samples Wilcoxon signed-rank test | <0.001 | Reject the null hypothesis |
| 2 | The median of differences between Jaya and IOHS equals 0 | Related-samples Wilcoxon signed-rank test | <0.001 | Reject the null hypothesis |
| 3 | The median of differences between NEA and IOHS equals 0 | Related-samples Wilcoxon signed-rank test | <0.001 | Reject the null hypothesis |

Asymptotic significances are displayed. The significance level is 0.05.
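The statistic behind the related-samples Wilcoxon signed-rank tests in Table 6 can be sketched with the standard library alone: take paired differences, drop zeros, rank the absolute differences (midranks for ties), and report the smaller of the positive and negative rank sums. The paired values below are illustrative only; in practice a routine such as scipy.stats.wilcoxon would also supply the p-value:

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-).
    Minimal sketch: zero differences dropped, midranks for tied |d|."""
    d = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(d):
        j = i
        # extend the block of tied absolute differences
        while j + 1 < len(d) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        midrank = (i + j) / 2 + 1  # average rank across the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    return min(w_plus, w_minus)

# Hypothetical paired ARPD values (not the paper's data):
print(wilcoxon_w([4.61, 8.63, 0.87, 2.14], [0.70, 1.25, 0.10, 0.04]))  # -> 0
```

When every difference has the same sign, as here, W equals 0, the most extreme value possible, which is what drives the very small significances reported in Table 6.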
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Shen, H.; Cheng, Y.; Li, Y. A Hybrid Harmony Search Algorithm for Distributed Permutation Flowshop Scheduling with Multimodal Optimization. Mathematics 2025, 13, 2640. https://doi.org/10.3390/math13162640

