Article

Co-Evolutionary Algorithm for Two-Stage Hybrid Flow Shop Scheduling Problem with Suspension Shifts

by Zhijie Huang, Lin Huang and Debiao Li *
Management Science and Engineering Department, Fuzhou University, Fuzhou 350116, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2575; https://doi.org/10.3390/math12162575
Submission received: 23 July 2024 / Revised: 16 August 2024 / Accepted: 19 August 2024 / Published: 20 August 2024
(This article belongs to the Section Engineering Mathematics)

Abstract
Demand fluctuates in actual production. When manufacturers face demand below their maximum capacity, suspension shifts are crucial for cost reduction and on-time delivery. In this case, suspension shifts are needed to minimize idle time and prevent inventory buildup. Thus, it is essential to integrate suspension shifts with scheduling under an uncertain production environment. This paper addresses the two-stage hybrid flow shop scheduling problem (THFSP) with suspension shifts under uncertain processing times, aiming to minimize the weighted sum of earliness and tardiness. We develop a stochastic integer programming model and validate it using the Gurobi solver. Additionally, we propose a dual-space co-evolutionary biased random key genetic algorithm (DCE-BRKGA) with parallel evolution of solutions and scenarios. Considering decision-makers' risk preferences, we use both average and pessimistic criteria for fitness evaluation, generating two types of solutions and scenario populations. Testing with 28 datasets, we use the value of the stochastic solution (VSS) and the expected value of perfect information (EVPI) to quantify benefits. Compared to the average scenario, the VSS shows that the proposed algorithm achieves additional value gains of 0.9% to 69.9%. Furthermore, the EVPI indicates that after eliminating uncertainty, the algorithm yields potential improvements of 2.4% to 20.3%. These findings indicate that DCE-BRKGA effectively supports varying decision-making risk preferences, providing robust solutions even without known processing time distributions.

1. Introduction

Suspension shifts are crucial for enterprises to adapt to market demand changes and to regulate production capacity by controlling working hours, ensuring on-time order delivery. In actual production, demand fluctuates. When demand is below maximum capacity, suspension shifts reduce production capacity, minimizing machine idle time, operational costs, and inventory due to early completions. Considering that in a production workshop, most processes involve human–machine collaboration and are influenced by fixed work schedules and shift handovers, the decision to suspend operations must be made according to the shift. Additionally, suspending all machines in the workshop simultaneously can maximize cost reduction, so all operations in the same workshop should be suspended according to the same shift. Therefore, production planning typically schedules workshop suspension shifts based on actual orders, creating non-production periods that constrain workshop scheduling.
However, if the suspension shift plan is not effectively integrated with production scheduling, it can fail to reduce operational costs and inventory, leading to lower fulfillment rates and financial losses. For example, as shown in Figure 1, there are six shifts, and the release times and due dates of three jobs, J_1, J_2, and J_3, are r_1, r_2, and r_3, and d_1, d_2, and d_3, respectively. In case 1, job J_2's arrival time r_2 causes machine idling, which the suspension shift plan cannot predict, leading to early completion, excess operational costs, and increased inventory. Even if the operation is suspended in shift 2, early completion and inventory issues persist. In case 2, an unreasonable suspension shift plan delays job J_3, resulting in penalties and customer loss. In case 3, a reasonable suspension shift plan effectively reduces operational costs and inventory. Thus, it is essential to integrate workshop suspension shifts with production scheduling. This ensures that the suspension shift plan effectively regulates production capacity and promotes on-time delivery.
The hybrid flow shop scheduling problem (HFSP) is a class of NP-hard combinatorial optimization problems [1] that is widely applied in industries such as chemicals, semiconductors, metallurgy, textiles, and logistics. The HFSP involves multiple production stages with parallel machines, where jobs must be processed in sequence. This paper investigates the two-stage hybrid flow shop scheduling problem (THFSP), a common variant of the HFSP restricted to two production stages, which simplifies the general problem while retaining significant theoretical and practical importance [2]. The THFSP is particularly relevant in applications such as PCB assembly, metal fabrication, and two-stage packaging processes. Meanwhile, the heterogeneity of production and the diversity of orders make it difficult to accurately predict the processing times of jobs, which in turn makes it even harder to ensure the rationality of pre-arranged workshop suspension shifts. If the actual processing time exceeds the estimated time, work orders will be overdue; conversely, if it is shorter than estimated, work orders will be completed early. Therefore, researching two-stage hybrid flow shop scheduling with suspension shifts is of significant importance in an environment with uncertain processing times.
The goal of this work is to provide decision-makers with robust scheduling solutions in the workshop. Considering suspension shifts and uncertain processing times, we develop a method based on a dual-space co-evolutionary biased random key genetic algorithm (DCE-BRKGA). In this method, parallel populations of solutions and scenarios co-evolve, with each relying on the other for the fitness evaluation of their individuals [3]. This approach aims to generate representative and diverse scenario populations and measure their impact on solution populations. Simultaneously, the solutions evolve based on different decision-making risk profiles, and we collectively evaluate the performance of the scenarios.
The primary contributions of this paper are related to the proposed problems, solution methods, and comprehensive experiments:
  • We propose a stochastic integer programming model to address the two-stage hybrid flow shop scheduling problem with suspension shifts under uncertain processing times. This model regards suspension shifts as a crucial tool to adjust capacity and ensure on-time delivery, and it aims to minimize the weighted sum of job earliness and tardiness. By incorporating constraints on job sequencing, machine utilization, and suspension shifts, the model can flexibly adjust capacity in complex production environments, ensuring the effective execution of production plans.
  • In a deterministic environment, we test the performance of a biased random key genetic algorithm (BRKGA) by comparing it with Gurobi and the random key genetic algorithm (RKGA). The BRKGA’s final solutions are, on average, 7.08% worse than Gurobi’s optimal solutions. Additionally, we compare the optimal solutions generated by RKGA and BRKGA under two different fitness criteria. The results show that BRKGA’s optimal solutions are, on average, 49.7% and 50.4% better than those of RKGA, demonstrating its ability to produce superior solutions.
  • For uncertain environments, we propose the DCE-BRKGA, which incorporates parallel population co-evolution. When applying the pessimistic criterion, we designed a specialized scenario fitness function based on Lemmas 1 and 2. It transforms scenario evolution into a maximization problem, while solution evolution is treated as a minimization problem. It clarifies the search direction and speeds up the convergence of the algorithm.
  • We evaluate the performance of the DCE-BRKGA by the value of the stochastic solution (VSS) and the expected value of perfect information (EVPI). The VSS shows that the proposed algorithm can achieve additional value gains of 0.9% to 69.9% compared to the average scenario. Furthermore, the EVPI indicates that after eliminating uncertainty, the algorithm yields potential improvements of 2.4% to 20.3%. This is particularly useful for manufacturers as it helps them understand the value of investing in solutions to reduce uncertainty or to improve prediction accuracy.
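For reference, the two metrics used above follow the standard two-stage stochastic programming definitions (the notation below is ours, not the paper's): let RP be the optimal expected objective of the stochastic (recourse) problem, EEV the expected cost of applying the solution obtained from the average scenario, and WS the expected wait-and-see objective. For a minimization problem,

$$\mathrm{VSS} = \mathrm{EEV} - \mathrm{RP} \ge 0, \qquad \mathrm{EVPI} = \mathrm{RP} - \mathrm{WS} \ge 0.$$

A large VSS says that solving the stochastic model pays off relative to planning on the average scenario; a large EVPI says that better forecasts of processing times would still be valuable.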
The structure of this article is as follows. Section 2 reviews the related literature. Section 3 states the proposed problem and presents a mathematical model. Section 4 introduces a co-evolutionary algorithm to solve this NP-hard problem, and Section 5 discusses the results of computational tests. Finally, Section 6 draws conclusions and discusses future research directions.

2. Literature Review

This study employs a co-evolutionary algorithm to conduct two-stage hybrid flow shop scheduling with suspension shifts in an environment of uncertain processing times. The research expands upon previous works in three key aspects: workshop scheduling considering suspending operations, workshop scheduling with uncertain processing times, and the application of co-evolutionary algorithms.

2.1. Workshop Scheduling Research Considering Suspending Operations

Workshop suspensions adjust production capacity by controlling machine availability time. Research on workshop scheduling considering machine availability can be categorized into two types:
Optimizing the scheduling plan: This approach treats machine availability as a constraint. For instance, Lin et al. [4] considered constraints due to regular maintenance and shift changes, employing an online algorithm to address the parallel machine scheduling problem for uninterrupted job processing. Nguyen [5] introduced a polynomial-time approximation scheme for minimizing the total completion time of two parallel machines. Yu et al. [6] examined the proportional flow shop scheduling problem with periodic machine maintenance. Detti et al. [7] aimed to minimize completion time in single-machine scheduling problems with uncertain maintenance activities using robust scheduling methods.
Collaborative optimization research: This type focuses on the synergy between machine availability planning and scheduling planning. Lee and Leon [8] studied the collaborative optimization of single-machine variable-rate scheduling, regarding maintenance as a tool to enhance equipment utilization and discussing strategies to minimize total processing time and delays within a single maintenance cycle. Nourelfath et al. [9] proposed a joint model for parallel machine system scheduling and maintenance. Lu et al. [10] suggested a new integrated model to dynamically adjust maintenance plans, noting that periodic maintenance might lead to excessive maintenance. Liu Yu et al. [11] introduced a parallel-machine production synchronization evaluation model, integrating production plans with preventive maintenance scheduling using the moment-matching method. Zheng et al. [12] explored the energy-saving two-stage hybrid flow shop scheduling problem, combining time-of-use electricity pricing decisions with machine working states and production plans to optimize the maximum completion time and total energy consumption.
These studies underscore the significance of joint decision-making in machine availability planning and production planning for enhancing production efficiency, reducing costs, and ensuring product quality. This area has garnered substantial attention in the academic community.

2.2. Workshop Scheduling Research Considering Uncertain Processing Time

Early research on workshop scheduling optimization was often based on idealized assumptions. However, uncertainties in actual production environments—such as fluctuations in equipment performance, failures, and maintenance needs—make accurate prediction of processing times challenging. This makes the in-depth study of processing time uncertainty crucial for optimizing scheduling, improving efficiency, and reducing risks. Current research on scheduling under uncertain processing times can be divided into three main directions:
Rescheduling: This approach generates an initial scheduling plan based on current workshop information and adjusts the plan according to actual conditions when random disturbances occur. For example, Zadeh [13] initially schedules based on estimated processing times and then reschedules after determining the actual times, using the artificial bee colony algorithm to solve the dynamic flexible workshop scheduling problem with the goal of minimizing the maximum completion time. Framinan et al. [14] explored using real-time completion times of pipeline operations to reschedule jobs, demonstrating through computational experiments that rescheduling policies are effective only if the processing time variability is low and the initial plan quality is good.
Stochastic scheduling: This approach uses known probability distributions or relevant empirical data about job processing times, employing methods based on expected indicators for decision-making. Yue et al. [15] addressed the timing window allocation scheduling problem with stochastic processing times, using a branch-and-bound algorithm to solve the single-machine scheduling problem optimally. Ghaedy-Heidary et al. [16] proposed a simulation optimization framework combining genetic algorithms and simulation models for the stochastic flexible job shop scheduling problem. Liu et al. [17] studied the stochastic parallel-machine scheduling problem with uncertain job arrivals and processing times, proposing a two-stage method to minimize the total cost: first assigning jobs when uncertainty is unknown, then scheduling when uncertainty is known.
Robust scheduling: This approach pre-identifies and evaluates potential uncertainty events during production, incorporating their impact into the preliminary scheduling plan to minimize worst-case performance. Lu et al. [18] studied the single-machine scheduling problem with uncertain processing times, using simple iterative improvement heuristics and simulated annealing heuristics. Wang et al. [19] investigated the robust scheduling problem for identical parallel machines with uncertain processing times, considering the possibility of outsourcing jobs. Xiao et al. [20] examined the job shop scheduling problem with stochastic deteriorating processing times, measuring schedule robustness by the expected deviation between realized and initial completion times.
Therefore, studying the randomness of processing times is crucial for optimizing scheduling and reducing risks. Scholars have proposed three methods—rescheduling, stochastic scheduling, and robust scheduling—to address this challenge, each with its advantages and disadvantages and suitable for different production environments. In practical applications, it is necessary to choose the appropriate scheduling strategy based on specific conditions.

2.3. Application Research on Co-Evolutionary Algorithms

To address the complex optimization challenges characterized by large scales, multiple objectives, and uncertainty, co-evolutionary algorithms have become a significant research area in evolutionary computation. These algorithms improve performance, efficiency, and robustness [21,22,23]. Co-evolutionary algorithms have received considerable attention in manufacturing production [24] and various other fields [25]. Population cooperation is the primary method to achieve co-evolution [26]. Co-evolution can be categorized into competitive and cooperative types based on different inter-population evaluation methods. The following discusses research on competitive and cooperative co-evolutionary algorithms for solving uncertainty problems.
Competitive co-evolutionary algorithms are based on the principle of interspecific competition in ecology, where improvement in one population exerts selective pressure on other populations, thereby affecting their evolution. Gu et al. [27] proposed a competitive co-evolutionary quantum genetic algorithm to solve job shop scheduling problems with uncertain processing times. They designed three interspecific competition strategies for population evaluation and dynamically adjusted the population sizes to increase genetic diversity and prevent premature convergence.
Cooperative co-evolutionary algorithms decompose complex problems into several subproblems, which are solved by various populations through multi-population cooperation for collaborative evaluation. Herrmann [28] was the first to design a dual-population co-evolutionary genetic algorithm for robust scheduling of parallel machines under uncertain processing times. Jensen [29] introduced a ranking-based scenario fitness evaluation, correcting the symmetry and bias issues of the original method. Oliveira et al. [3] proposed a dual-space co-evolutionary biased random key genetic algorithm for car rental problems with uncertain demand, where solution and scenario parallel populations co-evolve.
Current research shows that co-evolutionary algorithms have been widely applied in multiple fields, but there has been less research on dealing with uncertainties. The application of co-evolutionary algorithms to uncertainty problems can be categorized into competitive and cooperative types. Competitive co-evolutionary algorithms improve algorithmic performance through competitive strategies, while cooperative co-evolutionary algorithms decompose uncertainty problems, separating uncertain factors from certain ones to allow parallel populations of solutions and scenarios to co-evolve. Cooperative methods are simple to operate and have strong applicability, supporting the design of scenarios with different decision risk preferences and ensuring solution quality. However, their application has not been fully explored and utilized, necessitating further in-depth research.

3. Problem Definition

This paper discusses the two-stage hybrid flow shop scheduling problem with suspension shifts under uncertain processing times, aiming to obtain production schedules and a workshop suspension shift plan during the planning period. The production schedule aims for products to be completed exactly on the due date; early completion increases operational and inventory costs, while late completion results in penalties for breach of contract. Therefore, the objective of this study’s production scheduling is to minimize the weighted sum of earliness and tardiness of jobs, achieving on-time delivery to reduce operational and inventory costs and to minimize the penalties due to delays.
The problem under investigation can be characterized as follows: Given a set of jobs N = {1, 2, …, n}, each job undergoes processing operations in a two-stage hybrid flow shop, where the two stages have m_1 and m_2 identical parallel machines, respectively. Each job requires two operations, with the first operation processed at the first stage and the second operation processed at the second stage, and each operation can be assigned to any machine k within its corresponding stage. Treating each job's processing at each stage as a single job, the problem assumptions are as follows:
  • All jobs must pass through the stages in the same order, and a job can only proceed to the next stage after completing the current one;
  • A job cannot be processed on different machines simultaneously;
  • Each machine can process only one job at a time without preemption;
  • Parallel machines at each stage have identical technical features, production capacities, and processing speeds;
  • All jobs and machines are available at time 0;
  • Buffer areas between machines are sufficiently large, allowing processed jobs to wait;
  • Job setup times are included in the processing times or are negligible;
  • Machines cannot process jobs during suspension shifts, but if a job is interrupted by a suspension shift, it can resume processing after the suspension shift ends;
  • All machines follow the same suspension shift schedule to reduce operational costs.
The heterogeneity of workshop environments and the diversity of orders make predicting job processing times challenging, leading to uncertainties in actual production environments. Therefore, this study considers the limited known information about job processing times: specifically, the upper and lower bounds for each stage.
Most manufacturing companies operate on a two-shift system, dividing each day into two 12 h shifts (day and night). When planning a suspension shift schedule, it is necessary to schedule the production of current jobs and determine the shifts during the week when suspension should occur. However, to keep the production line running, the shop must ensure at least four working days per week, allowing for no more than six suspension shifts. Since the objective of this study is to minimize the weighted sum of job earliness and tardiness, we aim to maximize the number of workshop suspension shifts without worsening this objective value, achieving capacity optimization and on-time delivery of orders. Let t_m represent the number of suspension shifts required. The problem then transforms into finding the optimal solution for collaborative optimization by starting from t_m = 0 and incrementally adding one suspension shift until the objective value worsens or the number of suspension shifts exceeds six (t_m > 6), as shown in Figure 2. Additionally, if a day is divided into three, four, or more shifts, the algorithm can still be applied by simply recalculating the number of shifts and adjusting the constraints on t_m accordingly. For example, with three shifts, each shift lasts 8 h, and t_m can be recalculated on that basis.
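The incremental search over t_m described above can be sketched as follows. This is a minimal sketch: `evaluate(tm)` is an assumed callback (not part of the paper's code) that runs the scheduler with exactly `tm` suspension shifts and returns the resulting weighted earliness/tardiness objective.

```python
def best_suspension_count(evaluate, max_shifts=6):
    """Start from t_m = 0 and add one suspension shift at a time,
    stopping once the objective worsens or t_m exceeds max_shifts.
    Returns the best (t_m, objective) pair found."""
    best_tm, best_obj = 0, evaluate(0)
    for tm in range(1, max_shifts + 1):
        obj = evaluate(tm)
        if obj > best_obj:          # objective worsened: stop searching
            break
        best_tm, best_obj = tm, obj # more suspension without loss: keep it
    return best_tm, best_obj
```

Because ties are kept, the search prefers the largest t_m that does not worsen the objective, matching the stated goal of maximizing suspension shifts at no cost.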

Problem Modeling

To formalize the problem, we develop a stochastic integer programming model. Following the detailed problem description and the set of assumptions outlined earlier, we introduce pertinent symbols to enhance clarity, as presented in Table 1.

Mathematical Modeling

Decision variables:
- $Z_{ijlk}$: 0–1 variable; $Z_{ijlk} = 1$ if job $j$ is machined immediately after job $i$ on machine $k$ in stage $l$; otherwise, $Z_{ijlk} = 0$.
- $X_t$: 0–1 variable; $X_t = 1$ if a suspension occurs on shift $t$; otherwise, $X_t = 0$.
- $Y_{ilt}$: 0–1 variable; $Y_{ilt} = 1$ if the end machining time of job $i$ is at shift $t$ in stage $l$; otherwise, $Y_{ilt} = 0$.
- $R_{ilt}$: 0–1 variable; $R_{ilt} = 1$ if job $i$ is machined across shift $t$ in stage $l$; otherwise, $R_{ilt} = 0$.

Optimization model:

$$\min \sum_{i \in N} (\alpha_i E_i + \beta_i T_i) \tag{1}$$

Constraints:

$$E_i \ge 0, \quad \forall i \tag{2}$$
$$E_i - d_i + s_{i2} + \tilde{p}_{i2} + X_t (Y_{i2t} + R_{i2t}) \mu \ge 0, \quad \forall i, t \tag{3}$$
$$T_i \ge 0, \quad \forall i \tag{4}$$
$$T_i + d_i - \left( s_{i2} + \tilde{p}_{i2} + X_t (Y_{i2t} + R_{i2t}) \mu \right) \ge 0, \quad \forall i, t \tag{5}$$
$$\sum_{k \in M_l} \sum_{i \in N_0} Z_{ijlk} = 1, \quad \forall l, j \tag{6}$$
$$\sum_{j \in N} Z_{0jlk} \le 1, \quad \forall l, k \tag{7}$$
$$Z_{ijlk} + Z_{jilk} \le 1, \quad \forall k, l, i, \; j = i+1, \ldots, n \tag{8}$$
$$\sum_{j \in N} Z_{ijlk} - \sum_{h \in N_0} Z_{hilk} \le 0, \quad \forall l, k, i \tag{9}$$
$$s_{il} + \tilde{p}_{il} + X_t (Y_{ilt} + R_{ilt}) \mu - s_{jl} - M_1 (1 - Z_{ijlk}) \le 0, \quad \forall k, i, j, l, t \tag{10}$$
$$s_{0l} - s_{il} - M_1 (1 - Z_{0ilk}) \le 0, \quad \forall k, i, l \tag{11}$$
$$s_{i1} + \tilde{p}_{i1} + X_t (Y_{i1t} + R_{i1t}) \mu - s_{i2} \le 0, \quad \forall i \tag{12}$$
$$s_{il} \ge 0, \quad \forall i, l \tag{13}$$
$$s_{0l} = 0, \quad \forall l \tag{14}$$
$$\sum_{t \in T} X_t - t_m = 0 \tag{15}$$
$$s_{il} + \tilde{p}_{il} - \mu t - M_1 (1 - Y_{ilt}) \le 0, \quad \forall t, i, l \tag{16}$$
$$s_{il} + \tilde{p}_{il} - \mu (t - 1) + M_1 (1 - Y_{ilt}) \ge 0, \quad \forall t, i, l \tag{17}$$
$$s_{il} + \tilde{p}_{il} - \mu (t + 1) - M_1 (1 - R_{ilt}) \le 0, \quad \forall t, i, l \tag{18}$$
$$s_{il} + \tilde{p}_{il} - \mu t + M_1 (1 - R_{ilt}) \ge 0, \quad \forall t, i, l \tag{19}$$
$$Z_{ijlk} \in \{0, 1\}, \quad \forall k, i, j, l \tag{20}$$
$$Y_{ilt} \in \{0, 1\}, \quad \forall i, l, t \tag{21}$$
$$R_{ilt} \in \{0, 1\}, \quad \forall i, l, t \tag{22}$$
Equation (1) describes the objective function of this study as the weighted sum of the earliness and tardiness of jobs. Constraints (2) and (3) define the earliness of jobs, while constraints (4) and (5) define the tardiness of jobs. Constraint (6) ensures that each job is processed by exactly one machine. Constraint (7) ensures that each machine has at most one first job. Constraint (8) ensures sequencing feasibility by preventing two jobs from being immediate predecessors of each other on the same machine. Constraint (9) ensures that the immediate predecessor and successor operations of a job must be on the same machine. Constraints (10) and (11) calculate the start time of each job. If the end time of the previous job falls within a suspension shift or if the job's processing time spans a suspension shift, the start time of the job must include the suspension shift time. Constraint (12) ensures that the start time of the second-stage operation of a job is later than the completion time of the same job in the first stage. Constraint (13) states that all jobs arrive at time 0. Constraint (14) establishes that the earliest production time for each machine is at time 0. Constraint (15) guarantees that the total number of suspension shifts meets the required criterion. Constraints (16) and (17) determine the interval of the shift in which the job ends. Constraints (18) and (19) specify the shifts spanned by the job processing. Finally, constraints (20)–(22) indicate that the decision variables are binary integers.
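As a concrete reading of objective (1) and the earliness/tardiness definitions in constraints (2)-(5), the sketch below computes the weighted sum from stage-2 completion times. It uses toy inputs; `C` is assumed to already include any suspension-shift adjustment, which the model handles through the X, Y, and R variables.

```python
def weighted_earliness_tardiness(C, d, alpha, beta):
    """Objective (1): sum_i alpha_i*E_i + beta_i*T_i, where
    E_i = max(0, d_i - C_i) and T_i = max(0, C_i - d_i),
    with C_i the stage-2 completion time of job i."""
    total = 0.0
    for Ci, di, ai, bi in zip(C, d, alpha, beta):
        total += ai * max(0.0, di - Ci)   # earliness penalty
        total += bi * max(0.0, Ci - di)   # tardiness penalty
    return total
```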

4. Solution Method

As the current problem is NP-hard with uncertain processing times, we adopt the DCE-BRKGA, which handles uncertainty through the co-evolution of solution and scenario populations. This is an innovative extension of the original genetic algorithm, addressing its limitation in handling uncertainty problems. The flowchart of the algorithm is presented in Figure 3.
First, the solution population X^1 and scenario population S^1 are initialized through random number encoding, with the current iteration number set to i = 1, as described in Section 4.1. Each pair of individuals (x, s) formed from the two populations, where x ∈ X^i and s ∈ S^i, is decoded in Section 4.2 to calculate the objective function of the complete scheduling solution. The calculation of the objective function for each pair (x, s) yields the matrix F(x, s). Next, based on the objective function matrix F(x, s), the fitness values of all solutions x ∈ X^i and scenarios s ∈ S^i are calculated, and the individuals are sorted according to their fitness values. Considering the differences in risk identification and management capabilities among decision-makers, two types of individual fitness evaluation methods are proposed; detailed information is introduced in Section 4.3. Finally, all individuals in each population are divided into elite and non-elite individuals. Each population is evolved by retaining elite individuals, inserting mutant individuals, and crossing individuals into the next-generation population, as shown in Section 4.4.
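The loop just described can be sketched as follows. This is a minimal stand-in, not the paper's implementation: `F(x, s)`, the gene initializers, and the BRKGA parameters (elite size, mutant count, inheritance probability `rho`) are illustrative assumptions, and the scenario fitness here is a simple adversarial average rather than the diversity measure of Section 4.3.

```python
import random

def coevolve(F, init_solution, init_scenario, pop=8, gens=20,
             elite=2, mutants=2, rho=0.7, seed=0):
    """Dual-space co-evolution skeleton: solutions and scenarios evolve
    in parallel, each evaluated against the whole opposing population."""
    rng = random.Random(seed)

    def next_gen(ranked):
        # BRKGA-style step: keep elites, add random mutants, then fill
        # with biased crossover (each gene taken from the elite parent
        # with probability rho).
        new = [list(g) for g, _ in ranked[:elite]]
        new += [[rng.random() for _ in ranked[0][0]] for _ in range(mutants)]
        while len(new) < pop:
            e = rng.choice(ranked[:elite])[0]
            o = rng.choice(ranked[elite:])[0]
            new.append([e[j] if rng.random() < rho else o[j]
                        for j in range(len(e))])
        return new

    X = [init_solution(rng) for _ in range(pop)]
    S = [init_scenario(rng) for _ in range(pop)]
    for _ in range(gens):
        Fm = [[F(x, s) for s in S] for x in X]          # objective matrix
        # solutions: lower average objective over scenarios is better
        xr = sorted(zip(X, [sum(r) / len(r) for r in Fm]),
                    key=lambda t: t[1])
        # scenarios: higher average objective is better (adversarial stand-in)
        cols = list(zip(*Fm))
        sr = sorted(zip(S, [sum(c) / len(c) for c in cols]),
                    key=lambda t: -t[1])
        X, S = next_gen(xr), next_gen(sr)
    Fm = [[F(x, s) for s in S] for x in X]
    return min(zip(X, [sum(r) / len(r) for r in Fm]), key=lambda t: t[1])
```

The key structural point is that neither population has a standalone fitness: every evaluation pairs an individual from one space with the whole current population of the other space.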

4.1. Initialization

4.1.1. Solution Population Initialization

The solution is divided into two parts: a scheduling decision and a workshop suspension shift decision. Given a population size of P, the initial population is represented by a P × (2n + t) matrix X^i, where i represents the iteration number and each row of the matrix represents an individual in the population. For convenience, the part representing the scheduling decisions is denoted by a P × 2n matrix A^i, and the part representing the workshop suspension shift decision is denoted by a P × t matrix B^i.
For the matrix A^i, the j-th number of the e-th row is denoted as a^i_{ej}. Thus, the matrix A^i can also be represented by its rows a^i_e, with A^i = [a^i_1, a^i_2, …, a^i_P]^T. Each row a^i_e contains 2n real numbers, as shown in Figure 4. For convenience, by treating the processing of each stage of the n jobs as a separate job, these 2n real numbers represent the scheduling information for 2n jobs. Decoding these 2n real numbers yields the scheduling decision plan. Among them, jobs j ∈ {1, 2, …, n} represent the processing of the n jobs in the first stage, and jobs j ∈ {n + 1, n + 2, …, 2n} represent the processing of the n jobs in the second stage. That is, when j ∈ {1, 2, …, n}, a^i_{ej} and a^i_{e,j+n} respectively represent the encoding of the first- and second-stage processing information for the same job.
There are m_1 machines in the workshop used for processing jobs in the first stage, and m_2 machines for the second stage. Let M denote the set of all machines in the workshop: M = {1, 2, …, m_1 + m_2}. The subset {1, 2, …, m_1} represents the machines for the first stage, and the subset {m_1 + 1, m_1 + 2, …, m_1 + m_2} represents the machines for the second stage. To ensure the feasibility of the scheduling decision a^i_e, the encoding method used is shown in Equation (23).
$$a^i_{ej} \in \begin{cases} (0, m_1], & j \in \{1, 2, \ldots, n\} \\ (m_1, m_1 + m_2], & j \in \{n+1, n+2, \ldots, 2n\} \end{cases} \tag{23}$$
For the matrix B^i, the j-th number of the e-th row is denoted as b^i_{ej}. Thus, the matrix B^i can be represented by its rows b^i_e, with B^i = [b^i_1, b^i_2, …, b^i_P]^T. Each row b^i_e contains t binary (0–1) variables. Since the number of suspension shifts is t_m, to ensure the feasibility of the workshop suspension shift decision b^i_e, each row b^i_e needs to satisfy Equation (24).
$$\sum_{j=1}^{t} b^i_{ej} = t_m \tag{24}$$
Let us consider an example: suppose there are three jobs, with two machines in the first stage and three machines in the second stage. The production schedule spans four shifts, with one of these shifts requiring suspension. The population size P is five. The initial population X^1 is obtained by combining matrices A^1 and B^1, as shown below:
X^1 =
[ 0.1136  1.1037  3.0028  4.8345  2.6842  4.0088 | 0 0 1 0 ]
[ 0.8455  0.7480  2.5718  2.5088  3.9093  4.7653 | 1 0 0 0 ]
[ 1.0174  1.4762  4.8187  3.4526  2.9920  4.7296 | 0 1 0 0 ]
[ 1.6155  0.2605  4.1876  4.1897  4.9889  4.1250 | 1 0 0 0 ]
[ 0.2479  1.5545  3.7086  3.1028  4.0237  2.4210 | 0 0 0 1 ]
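Generating such an initial population can be sketched as follows, under the encoding rules of Equations (23) and (24). This is an illustrative sketch only; the handling of the half-open interval endpoints is simplified (`random.uniform` may return either bound).

```python
import random

def init_populations(P, n, m1, m2, t, tm, seed=0):
    """Build the initial solution population: scheduling-key part A
    (first n keys per row in (0, m1], next n in (m1, m1 + m2]) and
    suspension-shift part B (t binaries per row with exactly tm ones)."""
    rng = random.Random(seed)
    A, B = [], []
    for _ in range(P):
        keys = [rng.uniform(0, m1) for _ in range(n)]           # stage-1 keys
        keys += [rng.uniform(m1, m1 + m2) for _ in range(n)]    # stage-2 keys
        A.append(keys)
        shifts = [1] * tm + [0] * (t - tm)                      # exactly tm ones
        rng.shuffle(shifts)                                     # random suspended shifts
        B.append(shifts)
    return A, B
```

Concatenating each row of `A` with the matching row of `B` gives one individual of X^1.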

4.1.2. Scenario Population Initialization

The initial population of scenarios is represented by a P × 2n matrix S^i. In matrix S^i, the first n numbers in each row represent the processing times of the n jobs in the first stage, with the j-th number of the e-th row denoted as p̃^i_{ej1}; the latter n numbers represent the processing times of the n jobs in the second stage, with the (j + n)-th number of the e-th row denoted as p̃^i_{ej2}. Due to limited information about the job processing times, it is only known that the processing time of job j in stage l has an upper bound U_{jl} and a lower bound L_{jl}. Therefore, the generated job processing times must follow a uniform distribution within the specified range, as shown in Equation (25).
$$\tilde{p}^i_{ejl} \in [L_{jl}, U_{jl}] \tag{25}$$
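Scenario initialization then reduces to uniform sampling within the known bounds, as in the sketch below (the flat 2n-vector layout of `L` and `U`, first n entries for stage 1 and the next n for stage 2, is an assumption for illustration).

```python
import random

def init_scenarios(P, L, U, seed=0):
    """Sample P processing-time scenarios per Eq. (25): each entry is
    drawn uniformly from [L_jl, U_jl], the only information assumed
    known about the uncertain processing times."""
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for lo, hi in zip(L, U)] for _ in range(P)]
```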

4.2. Decoding

Section 4.1 initializes the solution population and scenario population through encoding; decoding these populations yields a complete scheduling solution. When decoding the scheduling decision gene a^i_{ej}, the ceiling ⌈a^i_{ej}⌉ identifies the selected machine, and the fractional parts a^i_{ej} − ⌊a^i_{ej}⌋, taken in ascending order, give the processing sequence on each machine. For the suspension shift decision variable b^i_{ej}, b^i_{ej} = 0 indicates that operations can proceed normally in shift j, and b^i_{ej} = 1 indicates that operations are suspended in shift j. The scenario gene p̃^i_{ejl} represents the processing time of job j, which does not need to be decoded and can be used directly. Therefore, decoding (a, b, p̃) yields the complete schedule solution.
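The machine-and-sequence part of this decoding rule can be sketched for one stage as follows (`machines` is the list of valid machine indices for the stage, e.g. 1..m1 for stage 1; a hypothetical helper, not the paper's code).

```python
import math

def decode_stage(keys, machines):
    """Decode one stage's random keys: ceil(key) selects the machine,
    and sorting jobs by the fractional part of their keys gives the
    processing sequence on each machine."""
    sequence = {m: [] for m in machines}
    by_fraction = sorted(range(len(keys)),
                         key=lambda j: keys[j] - math.floor(keys[j]))
    for j in by_fraction:
        sequence[math.ceil(keys[j])].append(j)
    return sequence
```

For example, stage-1 keys [0.3, 1.7, 0.9] with machines [1, 2] put jobs 0 and 2 on machine 1 (in that order, since 0.3 < 0.9) and job 1 on machine 2.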

4.3. Individual Evaluation

When facing uncertainty in processing times, due to the lack of specific probability distribution information, we can only rely on the upper and lower bounds of processing times to make decisions. Considering the differences in risk identification and management capabilities among decision-makers, their focus on performance evaluation metrics also varies. This paper introduces two evaluation metrics: the average criterion and the pessimistic criterion. These metrics aim to provide decision-makers with a more comprehensive information framework that better reflects the differences in risk preferences during the decision-making process. Through multidimensional evaluation, this approach enhances the adaptability and accuracy of decision-making in uncertain environments.

4.3.1. Average Criterion Individual Evaluation Method

The average criterion guides decision-making by considering the overall effect across all possible scenarios: it computes the expected value of all potential outcomes to find the solution that performs best on average, which makes it especially suitable for risk-neutral decision-makers. Specifically, let $S$ be the set of all possible processing time scenarios, $X$ the set of all feasible solutions, and $F(x,s)$ the weighted sum of earliness and tardiness of solution $x$ under scenario $s$. The fitness value $\mathit{fitness}_x$ of solution $x$ is the unweighted average of its objective function over all scenarios, as shown in Equation (26).
$\mathit{fitness}_x = \frac{\sum_{s \in S} F(x,s)}{|S|}$  (26)
Within the dual-space co-evolutionary algorithm framework, the evolution objective of the scenario population should focus on enhancing diversity. This diversity ensures that the algorithm maintains the robustness and effectiveness of its solutions when facing a wide range of changing environments. To make the scenario population more diverse, the fitness value of a scenario individual is transformed into its contribution to the population's diversity, determined by calculating its distance from other scenarios. The distance calculation is based on feature generation: scenario individual $s$ is mapped into a two-dimensional space based on two relevant features, as shown in Figure 5: one is the maximum value of the objective function revealed by the solution population $X$, $\max_{x \in X} F(x,s)$, and the other is the minimum value, $\min_{x \in X} F(x,s)$. For each feature, the scenarios holding extreme values are given high fitness values to encourage expanding the "space" occupied by the population. For each remaining scenario, the two nearest scenarios on each axis are identified and the product of the distances to them is calculated. The fitness value of a scenario is the maximum of these distance products over the two axes, giving preference to scenarios that fill gaps in the population space.
The pseudo-code for Algorithm 1, which calculates the scenario fitness value, is shown below.
In conclusion, the evaluation of individuals within a population using the average criterion is designed to identify the solution that exhibits the best average performance across scenarios throughout the evolutionary process. The objective of evaluating scenarios is to assemble a set of scenarios that collectively exert the most diverse influence on the solutions, as illustrated in Figure 6, where the star signs represent individuals in each population. Consequently, the evolution of solutions is framed as a minimization problem, with individuals $x \in X$ of the solution population ranked in non-decreasing order of their fitness values. Conversely, the evolution of scenarios is a maximization problem, requiring individuals $s \in S$ of the scenario population to be ranked in non-increasing order of their fitness values.
Algorithm 1 Pseudo-code for the calculation of scenario fitness values
Input: Solution population $X$, scenario population $S$, and the $|X| \times |S|$ matrix $F(x,s)$
Output: The fitness values of all individuals in the scenario population $S$
 1: Initialization: lists $B$ and $W$ of length $|S|$; two-axis distance values $Bdist_s$ and $Wdist_s$ of scenario $s$; fitness value $\mathit{fitness}_s$ of scenario $s$
 2: for $j \leftarrow 1$ to $|S|$ do
 3:     $s \leftarrow S_j$;
 4:     $B.best_j \leftarrow \min_{x \in X} F(x,s)$;
 5:     $B.id_j \leftarrow s$;
 6:     $W.worst_j \leftarrow \max_{x \in X} F(x,s)$;
 7:     $W.id_j \leftarrow s$;
 8: end for
 9: sort $B$ in ascending order of $best$
10: sort $W$ in ascending order of $worst$
11: for $j \leftarrow 2$ to $|S| - 1$ do
12:     $Bdist_{B.id_j} \leftarrow (B.best_{j+1} - B.best_j) \times (B.best_j - B.best_{j-1})$;
13:     $Wdist_{W.id_j} \leftarrow (W.worst_{j+1} - W.worst_j) \times (W.worst_j - W.worst_{j-1})$;
14: end for
15: for all $s \in S$ do
16:     if $(s = B.id_1)$ or $(s = B.id_{|S|})$ or $(s = W.id_1)$ or $(s = W.id_{|S|})$ then
17:         $\mathit{fitness}_s \leftarrow +\infty$;
18:     else
19:         $\mathit{fitness}_s \leftarrow \max(Bdist_s, Wdist_s)$;
20:     end if
21: end for
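A compact Python sketch of Algorithm 1 (our own function names; `F` is the $|X| \times |S|$ objective matrix with one row per solution and one column per scenario):

```python
import numpy as np

def scenario_fitness(F):
    """Diversity-based fitness for scenarios (Algorithm 1).
    F[x, s]: objective value of solution x under scenario s.
    Returns one fitness value per scenario (column of F)."""
    best = F.min(axis=0)    # min over solutions, per scenario
    worst = F.max(axis=0)   # max over solutions, per scenario
    n = F.shape[1]
    fitness = np.zeros(n)
    for values in (best, worst):        # the two mapping axes
        order = np.argsort(values)      # scenario indices sorted by feature
        sorted_vals = values[order]
        # boundary scenarios on either axis get infinite fitness
        fitness[order[0]] = fitness[order[-1]] = np.inf
        for k in range(1, n - 1):
            d = (sorted_vals[k + 1] - sorted_vals[k]) * \
                (sorted_vals[k] - sorted_vals[k - 1])
            s = order[k]
            if np.isfinite(fitness[s]):
                fitness[s] = max(fitness[s], d)  # max over the two axes
    return fitness
```

Scenarios with higher values fill larger gaps in the two-dimensional feature space, so ranking them in non-increasing order favors diversity.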

4.3.2. Pessimistic Criterion Individual Evaluation Method

In decision theory, the pessimistic criterion, also known as the minimax criterion, is a conservative decision-making method applied under conditions of uncertainty. This criterion assumes that the worst-case scenario will occur and selects the decision option that minimizes the worst-case outcome. The approach is particularly suitable for decision-makers who are highly sensitive to risk, because it limits the potential maximum loss by considering all possible negative outcomes and choosing the option that guarantees the relatively best result in the worst case. Therefore, the fitness value $\mathit{fitness}_x$ of solution $x$ is its worst-case objective function value across all scenarios, as shown in Equation (27).
$\mathit{fitness}_x = \max_{s \in S} F(x,s)$  (27)
However, in the two-stage hybrid flow shop scheduling problem with suspension shifts considered in this paper, there are sequential constraints between operations. The completion time of each job cannot be expressed as a linear combination of the processing times $\{p_{11}, p_{21}, \ldots, p_{n1}, p_{12}, p_{22}, \ldots, p_{n2}\}$ of all jobs. Therefore, no effective information can be derived about which scenarios produce extreme performance values for a solution, and the search space for scenarios is the entire feasible domain. According to Lemmas 1 and 2, Equation (28) can be used as the fitness function for evaluating scenario $s$.
$\mathit{fitness}_s = \min_{x \in X} F(x,s)$  (28)
Lemma 1
([30]). If there exist $x^* \in X$ and $s^* \in S$ such that $F(x^*, s) \le F(x^*, s^*) \le F(x, s^*)$ for all $x \in X$ and $s \in S$, then $\min_{x \in X} \max_{s \in S} F(x,s) = \max_{s \in S} \min_{x \in X} F(x,s)$, and vice versa.
Lemma 2
([30]). If $(x_1, s_1)$ is a solution to the $\min_{x \in X} \max_{s \in S} F(x,s)$ problem and $(x_2, s_2)$ is a solution to the $\max_{s \in S} \min_{x \in X} F(x,s)$ problem, then $(x_1, s_2)$ is simultaneously a solution to both problems.
In summary, with the pessimistic criterion, solution evaluation focuses on finding the best solution in the worst-case scenario, while scenario evaluation aims to identify the scenario that causes the worst solution performance, as shown in Figure 7. Therefore, the evolution of solutions is a minimization problem, and the individuals $x \in X$ in the solution population are sorted in non-decreasing order of their fitness values; the evolution of scenarios is a maximization problem, and the individuals $s \in S$ in the scenario population are sorted in non-increasing order of their fitness values.
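Given the objective matrix $F$, the two solution-evaluation rules of Equations (26) and (27) reduce to simple row operations; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def solution_fitness(F, criterion="average"):
    """Fitness of each solution (row of F) over the scenario population.
    'average': mean over scenarios, Equation (26) (risk-neutral);
    'pessimistic': worst case over scenarios, Equation (27) (minimax)."""
    F = np.asarray(F, float)
    if criterion == "average":
        return F.mean(axis=1)
    if criterion == "pessimistic":
        return F.max(axis=1)
    raise ValueError(f"unknown criterion: {criterion}")

F = [[1.0, 3.0],   # solution 0 under scenarios 0 and 1
     [2.0, 2.0]]   # solution 1
print(solution_fitness(F, "average"))      # [2. 2.]
print(solution_fitness(F, "pessimistic"))  # [3. 2.]
```

Note how the two criteria rank these solutions differently: the average criterion ties them, while the pessimistic criterion prefers solution 1 for its lower worst case.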

4.4. Evolutionary Operations

In the DCE-BRKGA, an elite strategy, mutation, and crossover generate the new populations $X^{i+1}$ and $S^{i+1}$. The elite strategy retains the top $P_e$ individuals, while mutation and crossover generate $P - P_e$ new candidate individuals. Together these individuals form the new generation of size $P$. This process repeats until the termination condition is met.

4.4.1. Elite Strategy

Since the individuals in the population were already sorted by fitness value in Section 4.3, the top $P_e$ individuals are the elite individuals $V_e^i$, and the remaining individuals are the non-elite individuals $Y_e^i$. The $P_e$ elite individuals are retained in the next-generation population.

4.4.2. Mutation

We generate P m mutated individuals in the same way as the initialization in Section 4.1 and retain them in the next-generation population.

4.4.3. Crossover

After the mutation operation, crossover is performed to generate new individuals $W_e^{i+1}$ for the next-generation population. To maintain the population size in each generation, $P - P_e - P_m$ crossover individuals must be generated. Each crossover individual $W_e^{i+1} = [w_{e1}^{i+1}, w_{e2}^{i+1}, \ldots, w_{en}^{i+1}]$ is built gene by gene, with each gene $w_{ej}^{i+1}$ selected from the elite parent's gene $v_{ej}^{i}$ or the non-elite parent's gene $y_{ej}^{i}$. This selection is controlled by a predetermined biased elite probability $\rho \in [0, 1]$. The gene $w_{ej}^{i+1}$ is obtained by Equation (29).
$w_{ej}^{i+1} = \begin{cases} v_{ej}^{i} & \text{if } r(j) \le \rho \\ y_{ej}^{i} & \text{otherwise} \end{cases}$  (29)
In Equation (29), $r(j)$ is a random number in $(0,1)$. If the generated random number is no greater than $\rho$, the elite parent's gene is chosen; otherwise, the non-elite parent's gene is chosen. When performing crossover on the solution and scenario populations, the scheduling decision part of each individual's genes is crossed using this method, while the suspension shift decision part directly inherits the elite parent's genes.
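Equation (29) is the standard BRKGA parameterized uniform crossover; a minimal sketch (helper name is ours):

```python
import random

def biased_crossover(elite, non_elite, rho=0.7, rng=random):
    """Parameterized uniform crossover of Equation (29): each gene is copied
    from the elite parent with probability rho, else from the non-elite one."""
    return [v if rng.random() <= rho else y for v, y in zip(elite, non_elite)]

random.seed(1)
child = biased_crossover([0.1, 0.2, 0.3], [0.9, 0.8, 0.7], rho=0.7)
# every child gene comes from one of the two parents
assert all(c in (v, y) for c, v, y in
           zip(child, [0.1, 0.2, 0.3], [0.9, 0.8, 0.7]))
```

With $\rho > 0.5$ the child is biased toward the elite parent, which is what distinguishes the BRKGA from the plain RKGA.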

5. Computational Experiments, Results, and Discussion

Under conditions of processing time uncertainty, simulation tests are conducted on the two-stage hybrid flow shop scheduling problem with suspension shifts. The parameters of the generated data instances are as follows. The number of machines per stage is $m \in \{2, 3, 4, 5, 6, 7\}$, with $m_1$ and $m_2$ both equal to $m$. The number of jobs is $n \in \{10, 15, 20, 25, 30, 35\}$. The lower bound of the processing time of job $j$ in stage $l$ follows a uniform distribution $L_{jl} \sim U[L, \gamma_1 L]$, where $L$ represents the minimum processing time per stage for a job; to ensure at least four days of workload, $L$ is set to $96m/n$. The upper bound of the processing time of job $j$ in stage $l$ follows a uniform distribution $U_{jl} \sim U[L_{jl}, (1+\gamma_2) L_{jl}]$, with parameters $\gamma_1 = 2$ and $\gamma_2 = 1.5$ controlling the relative ranges of the lower and upper bounds. To test the impact of due dates, the due date $d_j$ of job $j$ follows a uniform distribution $U(\mu - \mu R/2, \mu + \mu R/2)$, where $\mu = (1-T) E[C_{\max}]$ and $E[C_{\max}] = \frac{1}{2m} \sum_{j=1}^{n} \sum_{l=1}^{2} \frac{L_{jl} + U_{jl}}{2}$. The parameter $T = 0.3$ is the tardiness factor, and $R \in \{0.6, 1.2\}$ is the relative range of the due date window: when $R = 0.6$ the due dates are tight, and when $R = 1.2$ they are relaxed. The weights for job earliness and tardiness follow a discrete uniform distribution on $[a, b]$. The specific parameter values are shown in Table 2, giving 72 possible combinations. From these, 28 different scale combinations are selected to generate 28 data instances for experimental testing. The specific data instance numbers and corresponding parameter combinations are listed in Table 3.
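Under these settings, one instance can be generated as sketched below (our own code; the distributional assumptions are read off the paragraph above, and the weight distribution on $[a, b]$ is omitted):

```python
import numpy as np

def generate_instance(n, m, R, gamma1=2.0, gamma2=1.5, T=0.3, rng=None):
    """Generate processing-time bounds L[j, l], U[j, l] (2 stages)
    and due dates d[j] for one test instance."""
    rng = np.random.default_rng(rng)
    L_min = 96 * m / n                      # at least four days of workload
    L = rng.uniform(L_min, gamma1 * L_min, size=(n, 2))
    U = rng.uniform(L, (1 + gamma2) * L)    # element-wise bound broadcasting
    E_cmax = ((L + U) / 2).sum() / (2 * m)  # expected makespan estimate
    mu = (1 - T) * E_cmax
    d = rng.uniform(mu - mu * R / 2, mu + mu * R / 2, size=n)
    return L, U, d

L, U, d = generate_instance(n=10, m=2, R=0.6, rng=0)
```

Varying `R` between 0.6 and 1.2 reproduces the tight versus relaxed due-date settings.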
The performance of the BRKGA is highly sensitive to its parameters: the population size $P$, elite proportion $P_e$, mutant proportion $P_m$, and elite gene inheritance probability $\rho$. Following the parameter choices recommended by Gonçalves [31], for the DCE-BRKGA dual-space parallel evolution we set $P_e = 0.2$, $P_m = 0.1$, and $\rho = 0.7$. The population size is set to $P = 2n \times (m_1 + m_2)$, where $n$ is the total number of jobs and $m_1$ and $m_2$ are the numbers of machines in the first and second stages, respectively. The stopping criterion is $2n$ non-improving iterations.
All experiments are conducted on a desktop computer with an AMD Ryzen 9 5900X 12-Core (3.70 GHz; AMD, Santa Clara, CA, USA) CPU and 32 GB RAM, using Python 3.9 and Gurobi 11.0.0 as the experimental tools.

5.1. Solution Evolution

To evaluate the algorithm's ability to generate high-quality solutions, we analyze the performance improvement during iterations and the performance differences across multiple runs. Figure 8 shows the iterative process for data Instance 11, illustrating the changes in the fitness value of the best solution in each generation so that evolution and convergence are easy to observe. Despite fluctuations caused by changes in the scenario population during evolution, the fitness values under both evaluation criteria improve significantly and eventually stabilize. To quantify the performance improvement, we analyze the iteration process that generated the optimal solution. Using the scenario population from the final generation of that run, we calculate the initial solution's fitness value $f_{initial}$ and the final optimal solution's fitness value $f_{final}$, which are used to compute the improvement during the iteration, as shown in Equation (30). Table 3 records the improvement for each data instance under both evaluation criteria, showing consistent improvement, with an average of about 72%.
$\text{Degree of improvement of the solution} = \frac{f_{final} - f_{initial}}{f_{initial}} \times 100\%$  (30)
In this study, each data instance is run ten times to ensure result reliability, and we record the fitness values of the final solutions for each run. The lowest fitness value among these runs is selected as the optimal fitness value for each data instance. As shown in the “Optimal Fitness Value” column in Table 3 and Figure 9, the optimal fitness values obtained using the pessimistic criterion are generally higher than those obtained using the average criterion. This indicates that the pessimistic criterion tends to consider the worst-case scenarios, leading to relatively poorer fitness values.
To assess the algorithm’s stability and reliability, we analyze the coefficient of variation of the fitness values from ten runs. The “Coefficient of Variation” column in Table 3 and Figure 10 show average values of 12.8% and 11.0% for the two evaluation criteria, indicating some variability between runs due to the different scenario populations generated each time. Specifically, when the number of jobs n is 10, the coefficient of variation is unstable, sometimes exceeding 20%. This is likely because with fewer jobs, the processing time of each job becomes more critical, and minor adjustments to solutions can lead to significant changes in fitness values.
Table 3 provides a quantitative analysis of the average run times, revealing that run times are significantly affected by the scale of the data instances. As the number of jobs and machines increases, so does the run time. Additionally, when the due dates are more relaxed, the run time tends to increase. This may be because relaxed due dates imply ample capacity, requiring more suspension shifts to meet production needs. To further verify this observation, Table 3 also records the number of suspension shifts corresponding to the optimal solution. The data show that in instances with R = 1.2 , the optimal solutions generally require more suspension shifts than those with R = 0.6 , supporting the above findings.

5.2. Algorithm Comparison Experiments

To further verify the performance of the algorithm in terms of the quality of the generated solution, this paper compares the BRKGA with the Gurobi solver and the RKGA under deterministic scenarios.

5.2.1. Comparative Experiments with the BRKGA and Gurobi

Since this problem is NP-hard, Gurobi can only solve relatively small data instances. This paper designs 12 small data instances so that Gurobi can reach optimal solutions within a reasonable time. The instance generation method is similar to that in Section 5, with two machines per stage and the number of jobs $n \in \{4, 6, 8, 10, 12\}$; the other parameters are generated in the same way. Each data instance fixes one scenario within the processing time range. In this deterministic scenario, a comparative experiment is conducted between the BRKGA and Gurobi; we limit the suspension shifts to no more than one and compare the minimum objective values obtained by both methods.
As shown in Table 4, the final objective value obtained by the BRKGA is, on average, 7.08% higher than the optimal value obtained by Gurobi. In terms of the average run time, when n = 12 and m 1 = m 2 = 2 , Gurobi takes over an hour to solve the problem, while the BRKGA completes it within 8 s. This demonstrates that the BRKGA can effectively solve the problem in a much shorter time.

5.2.2. Comparative Experiments with the BRKGA and RKGA

This paper compares the optimal solutions generated by the RKGA [32] and the BRKGA under the two fitness criteria in a fixed scenario. Comparative experiments are conducted on 14 data instances, with the results shown in Table 5 and Figure 11. The average deviation of the BRKGA's final best value from that of the RKGA is −49.7% under the average criterion and −50.4% under the pessimistic criterion. Furthermore, Figure 11 indicates that, compared to the average criterion, the pessimistic criterion yields a smaller variance in the BRKGA's deviation from the RKGA across the 14 data instances. Although the main advantage of the proposed heuristic, the simultaneous generation of scenarios, is not evaluated in this experiment, the results support that the BRKGA under the co-evolutionary mechanism yields better solutions than the RKGA, and its advantage becomes more apparent as the scale of the data instances increases.
In addition to comparing the optimal values of the two algorithms, their average run times were also compared, as shown in Table 6. It can be seen that under both evaluation criteria, the BRKGA’s average run time is longer. This is because the RKGA is more likely to get trapped in local optima, leading to early termination of iterations. This reflects the BRKGA’s advantage in balancing search depth and breadth. The BRKGA’s elite strategy and biased selection strategy effectively balance the search breadth (exploring new potential solutions) and depth (optimizing known solutions), ensuring that the algorithm neither gets trapped in local optima too early nor fails to effectively approach the global optimum.

5.3. Analysis Based on the EVPI and VSS Indicators

The DCE-BRKGA is a heuristic algorithm, so its results, including solutions and scenarios, are not guaranteed to be optimal. When evaluated using the average criterion, the problem is classified as a stochastic optimization problem. For such problems, the EVPI and VSS are used to assess the benefits of the stochastic optimization solutions obtained by the DCE-BRKGA under uncertainty. This evaluation helps quantify the potential value and effectiveness of the algorithm’s solutions when accounting for uncertainty.
Figure 12 illustrates the framework for calculating the EVPI and VSS. The EVPI is the difference between the optimal fitness value obtained by the stochastic method, the randomized adaptation (RA) value, and the "wait-and-see" (WS) value. The WS value represents the average performance of the optimal solutions for each scenario, assuming perfect information is available in advance. The DCE-BRKGA evolves two parallel BRKGAs in which solutions and scenarios co-evolve. To obtain comparable metrics, a similar heuristic, a single BRKGA, is used to generate the "wait-and-see" solutions; unlike the two parallel BRKGAs, the single BRKGA evolves only the solution population under fixed (deterministic) scenarios. The VSS compares the RA value with the expected value of using the expected solution (EEV), which considers only the average scenario. A single-space BRKGA generates this expected result, where the processing time of the average scenario is the mean of the processing times over all scenarios.
To further verify the potential value and efficiency of the DCE-BRKGA, the expected value of perfect information as a percentage ( E V P I % ) is used to evaluate the value of obtaining perfect information. This value is defined as the degree of deviation between the RA obtained by the DCE-BRKGA and the WS obtained by the “wait-and-see” method and is calculated as:
$EVPI\% = \frac{RA - WS}{WS} \times 100\%$
The value of the stochastic solution as a percentage ( V S S % ) represents the degree of deviation of the expected value with respect to the RA and is calculated as:
$VSS\% = \frac{EEV - RA}{RA} \times 100\%$
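The two percentages are direct ratio computations; a minimal sketch (function names are ours, and the numeric values below are hypothetical, for illustration only):

```python
def evpi_pct(ra, ws):
    """EVPI% = (RA - WS) / WS * 100: cost of not having perfect information."""
    return (ra - ws) / ws * 100.0

def vss_pct(eev, ra):
    """VSS% = (EEV - RA) / RA * 100: gain of the stochastic solution over
    the expected-scenario solution."""
    return (eev - ra) / ra * 100.0

# Hypothetical values: RA = 110, WS = 100, EEV = 130
print(evpi_pct(ra=110.0, ws=100.0))  # 10.0
print(vss_pct(eev=130.0, ra=100.0))  # 30.0
```

Both metrics are positive when the stochastic method sits between the perfect-information bound (WS) and the deterministic shortcut (EEV), as in the reported results.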
This study uses 28 data instances to calculate the EVPI and VSS, and the results are shown in Table 7 and Figure 13. For the EVPI, the results indicate that if uncertainty is eliminated, potential improvements of 2.4% to 20.3% can be expected, assuming the scenario population is representative and perfect information is available before decisions are made. This value is very useful for manufacturers, as it helps them understand the value of investing in better prediction methods.
Regarding the VSS, this metric directly measures the impact of using a stochastic method compared to a similar deterministic method. The results show that the consideration of uncertainty in the decision-making process can achieve additional value gains of 0.9% to 69.9%. This finding highlights the importance and potential benefits of incorporating uncertainty into decision-making.

5.4. Scenario Evolution

This section aims to evaluate the quality of the final scenario population obtained using the average criterion, focusing on diversity and representativeness. In Section 5.1, we discussed that the fluctuations in solution fitness values (see Figure 8) when using the average criterion are caused by the non-monotonic evolution of the scenario population. Due to strategies to increase the scenario population diversity, scenarios are ranked based on their differential impact on the solutions. This means that the addition or removal of a single individual can significantly change the fitness of another individual. However, in this study, the fitness of each scenario is quantified to guide the population towards certain features, with the focus being on the overall structure of the population rather than the fitness of individual scenarios. Therefore, when considering the quality of the scenario population, two key features are: diversity—the goal is to obtain scenarios that have different impacts on the solutions; representativeness—since the scenario population cannot cover all possible scenarios, it is essential to ensure that the selected scenarios reflect the performance of the solutions across all potential scenarios.

5.4.1. Diversity

The diversity of the scenario population relates to the calculation of the scenario fitness values introduced in Section 4.3.1. Figure 14 uses the previously introduced two-dimensional coordinate system, as shown in Figure 5, where each scenario in the population is mapped based on the worst and best fitness values it “causes” in the solution population. This figure compares the initial and final generations of the scenario population as mapped within the final solution population. Compared to the initial scenario population (blue triangles), the final generation (red circles) occupies a larger “space”, with greater distances between scenarios, indicating higher diversity. The population’s skewed distribution is because the two features used to map scenarios are correlated; higher worst fitness values tend to accompany higher best fitness values. Considering that the solution population tends to converge towards similarly well-performing solutions, we do not expect to see a single scenario simultaneously adversely affecting one solution while benefiting another in the final generation.
Based on this mapping, population diversity can be measured from two perspectives: the range occupied in the space and the dispersion among scenarios. To quantitatively assess the space and dispersion, this study records the extremum and standard deviations for the two features. The metrics for the initial and final generations of the 28 data instances are detailed in Table 8.
To compare the initial and final generations of the scenario population in terms of occupied space and dispersion, this study introduces additional evaluation metrics: the extremum deviation ratio (EDR) and the standard deviation ratio (SDR). The EDR measures the expansion of the final generation’s range on the worst- and best-fitness-value axes compared to the initial generation. The SDR measures the increase in dispersion of the final generation relative to the initial generation.
Table 9 and Figure 15 show that, on average, the final generation’s EDR on the worst-fitness-value axis is 1.58 times that of the initial generation, with the SDR 1.76 times greater than that of the initial generation. On the best-fitness-value axis, the EDR is 2.19 times greater and the SDR is 2.47 times greater than for the initial generation. These results indicate that population diversity has improved in both dimensions, with a more significant improvement in the best-fitness-value dimension.

5.4.2. Representativeness

Since only boundary information is available for the processing times, it is difficult to assess the accuracy of the generated scenarios directly. Nonetheless, representativeness can be measured indirectly: a representative scenario population should affect the solutions in a similar way to the full scenario space. To test this, we evaluated the best solution found for each data instance using both the $P$ scenarios of the final generation from the same run and the $10P$ scenarios generated across ten runs. By calculating the fitness value increase percentage (FVIP), we compared the differences in fitness values for the best solution between these two scenario sets, as shown in Equation (33), where $S$ represents the set of $P$ final-generation scenarios from the same run, and $AS$ represents the set of $10P$ scenarios generated from ten runs.
$FVIP = \frac{\sum_{s \in AS} F(x,s) / |AS| - \sum_{s \in S} F(x,s) / |S|}{\sum_{s \in S} F(x,s) / |S|}$  (33)
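Equation (33) compares the mean objective of the best solution $x$ over the two scenario sets; a minimal sketch (function name is ours):

```python
def fvip(F_x_S, F_x_AS):
    """Equation (33): relative change in the best solution's mean objective
    when re-evaluated on the aggregated set AS instead of the same-run set S.
    F_x_S, F_x_AS: objective values of solution x on each scenario set."""
    mean_S = sum(F_x_S) / len(F_x_S)
    mean_AS = sum(F_x_AS) / len(F_x_AS)
    return (mean_AS - mean_S) / mean_S

# Identical means across the two sets give FVIP = 0
print(fvip([10.0, 12.0], [10.0, 12.0, 11.0, 11.0]))  # 0.0
```

A value near zero indicates that the same-run scenario population is representative of the larger aggregated set.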
Figure 16 shows the test results for 28 data instances. On average, the impact on fitness of the two scenario sets is similar, with absolute deviations ranging from 0.2% to 8.7%. The figure shows more negative values, indicating that the scenarios generated in the same run ($S$) tend to slightly underestimate the performance of the best solution. However, this is an average trend rather than a universal rule.

5.5. Decision Support

Ultimately, this method aims to support decision-makers by providing a range of high-quality solutions tailored to different risk preferences, supplemented by visual tools to demonstrate the potential impacts of uncertainty on these solutions. This section will delve into the outputs provided to decision-makers and the potential computational limitations.
The number of solutions obtainable with this method depends on the available time or computational resources. Even with limited resources, each fitness criterion can be run once to compare the best solutions generated under the two evaluation metrics. For example, Figure 17 shows the best results for Instance 24 under the two criteria; the vertical axis represents the objective value of each optimal solution in each scenario. In this data instance, the solution under the pessimistic criterion performs better in most scenarios, while the solution under the average criterion performs better in a few. Besides comparing objective values, the solutions can also be converted into Gantt charts to visually display scheduling and workshop suspensions (see Figure 18 and Figure 19). This feature is crucial for applying the method in decision support systems. For example, decision-makers can adjust the Gantt chart to modify the final solution and test the modified solution within the generated scenario set. Additionally, decision-makers can include specific scenarios in the initial population, subject to limits ensuring that the number of randomly generated individuals in the population still meets the minimum requirements.

6. Conclusions

This study regards workshop suspension shifts as a tool to adjust capacity and ensure on-time delivery. We propose a two-stage hybrid flow shop scheduling model with suspension shifts, addressing the gap in the integrated optimization of workshop suspensions and scheduling, and design the DCE-BRKGA to solve the scheduling problem under uncertain processing times. The method is highly applicable and provides robust solutions without requiring decision-makers to define scenarios or their probability distributions, needing only the upper and lower bounds of the uncertain parameters. In terms of application, the study considers both average and pessimistic criteria, offering a comprehensive information framework for decision-makers and enhancing adaptability and accuracy in uncertain environments through visual comparisons of the solutions. We quantified the benefits by using the VSS and EVPI on 28 datasets. Compared to the average scenario, the VSS results show that the proposed algorithm achieves additional value gains ranging from 0.9% to 69.9%. Furthermore, the EVPI indicates that the algorithm could potentially improve outcomes by 2.4% to 20.3% after eliminating uncertainty. These results demonstrate that the DCE-BRKGA effectively provides robust solutions even in the absence of known processing time distributions.
In this work, we assume that all workshop suspension shifts have the same priority. However, if suspension shifts during rest days were prioritized to reduce overtime, the model could have broader practical applications. Additionally, our current approach sets the objective as $\min \sum_{i \in N} (\alpha_i E_i + \beta_i T_i)$, balancing the conflicting objectives of minimizing earliness and tardiness through a weighted sum. While this method addresses both objectives within a single optimization framework, it does not fully explore the potential of multi-objective optimization, particularly the generation of a Pareto frontier of optimal solutions. Future work could focus on developing algorithms that explicitly consider Pareto-optimal solutions, offering a more comprehensive analysis of the trade-offs between multiple objectives. Moreover, optimizing the algorithm to reduce computation time remains crucial: evaluating the fitness of all individuals incurs high computational costs, whereas sampling too few individuals increases the algorithm's randomness. Effective sampling strategies and fitness evaluation are therefore important areas for further exploration to enhance the algorithm's efficiency and robustness.

Author Contributions

Conceptualization, L.H. and D.L.; Data curation, L.H.; Formal analysis, Z.H. and L.H.; Funding acquisition, D.L.; Investigation, Z.H. and L.H.; Methodology, L.H. and D.L.; Project administration, D.L.; Resources, D.L.; Software, Z.H. and L.H.; Supervision, D.L.; Validation, Z.H. and D.L.; Visualization, Z.H. and L.H.; Writing—original draft, Z.H. and L.H.; Writing—review and editing, Z.H. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by National Natural Science Foundation of China [grant No. 72171054] and the Fujian Provincial Natural Science Foundation [grant No. 2023J06015].

Data Availability Statement

The original data presented in the study are openly available in GitHub at https://github.com/nbzj/Data-Instances-of-Co-Evolutionary-Algorithm-for-THFSP-with-suspension-shifts.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DCE-BRKGA: dual-space co-evolutionary biased random key genetic algorithm
BRKGA: biased random key genetic algorithm
EVPI: expected value of perfect information
EVPI%: expected value of perfect information as a percentage
VSS: value of the stochastic solution
VSS%: value of the stochastic solution as a percentage
HFSP: hybrid flow shop scheduling problem
THFSP: two-stage hybrid flow shop scheduling problem
RKGA: random key genetic algorithm
RA: randomized adaptation
WS: wait-and-see
EEV: expected value using the expected solution
FVIP: fitness value increase percentage
EDR: extremum deviation ratio
SDR: standard deviation ratio

Figure 1. Co-optimization benefits analysis.
Figure 2. Problem transformation flowchart.
Figure 3. DCE-BRKGA flowchart.
Figure 4. Solution chromosomal fragments.
Figure 5. Calculation of scenario population fitness values and distance quantification.
Figure 6. Flowchart of co-evolution under the average criterion.
Figure 7. Flowchart of co-evolution under the pessimistic criterion.
Figure 8. Evolution of the solution’s fitness values (Instance 11, with the average criterion on the left and the pessimistic criterion on the right).
Figure 9. Optimal fitness values for the average and the pessimistic criterion.
Figure 10. Coefficients of variation for the average and pessimistic criterion.
Figure 11. Relative deviation of the final optimal values sought by the BRKGA and the RKGA.
Figure 12. Framework for calculating EVPI and VSS.
Figure 13. Results of the EVPI and VSS calculations.
Figure 14. First-generation scenario populations vs. last-generation scenario populations (Instance 11).
Figure 15. EDR and SDR on two feature axes.
Figure 16. Fitness value increase percentage.
Figure 17. Final generated objective value for optimal solution (Instance 24).
Figure 18. Gantt chart of optimal solutions under the average criterion (Instance 24).
Figure 19. Gantt chart of optimal solutions under the pessimistic criterion (Instance 24).
Table 1. Mathematical notations.

Notation | Description
S | Stage set, S = {1, 2}
l | Stage index, l ∈ S
M_l | Set of machines in stage l
M | Set of all machines
k | Machine index, k ∈ M_l
N | Set of jobs excluding virtual jobs, N = {1, 2, …, n}
N_0 | Set of jobs including virtual jobs, N_0 = {0, 1, 2, …, n}; for each stage and machine, a virtual job 0 is introduced that precedes the first job on each machine
n | Total number of jobs
m_l | Total number of machines in stage l
h, i, j | Job indices, h, i ∈ N_0, j ∈ N, h ≠ i ≠ j
α_i | Earliness weight of job i
β_i | Tardiness weight of job i
p̃_il | Stochastic processing time of job i in stage l
d_i | Due date of job i
T | Shift set, T = {1, 2, …, t_0}
t | Shift index, t ∈ T
μ | Duration of each shift
t_m | Shifts requiring suspension
s_il | Start time of job i in stage l
E_i | Earliness of job i
T_i | Tardiness of job i
M_1 | Maximum value (big-M constant)
Table 2. Parameters of test data instances.

Parameter | Experimental Value | Parameter Type
Number of machines per stage | 2, 3, 4, 5, 6, 7 | 6
Number of jobs | 10, 15, 20, 25, 30, 35 | 6
Lower bound of processing time | L_jl follows a uniform distribution on [ι_jl, γ_1 L_jl], where L = 96 m n and γ_1 = 2 | 1
Upper bound of processing time | U_jl follows a uniform distribution on [L_jl, (1 + γ_2) L_jl], where γ_2 = 1.5 | 1
Due date | d_j ~ U(μ − μR/2, μ + μR/2), where μ = (1 − T) E[C_max], E[C_max] = (1/(2m)) ∑_{j=1}^{n} ∑_{l=1}^{2} (L_jl + U_jl)/2, T = 0.3, R ∈ {0.6, 1.2} | 2
Earliness/tardiness weight | α_j, β_j follow a discrete uniform distribution over the range [1, 9] | 1
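The due-date scheme in Table 2 can be sketched as follows. This is an illustrative implementation under the stated parameters (T = 0.3, R ∈ {0.6, 1.2}); the data layout (per-job, per-stage bound lists) and the interpretation of m as machines per stage are assumptions:

```python
import random

def sample_due_dates(L, U, m, T=0.3, R=0.6):
    """Draw due dates d_j ~ U(mu - mu*R/2, mu + mu*R/2), where
    mu = (1 - T) * E[C_max] and the makespan estimate is
    E[C_max] = (1 / (2m)) * sum over jobs j and stages l of (L_jl + U_jl) / 2.
    L[j][l] / U[j][l] are the lower/upper processing-time bounds of job j
    in stage l; m is the number of machines per stage (assumed layout)."""
    n = len(L)
    e_cmax = sum((L[j][l] + U[j][l]) / 2.0
                 for j in range(n) for l in range(len(L[j]))) / (2.0 * m)
    mu = (1.0 - T) * e_cmax
    half = mu * R / 2.0
    return [random.uniform(mu - half, mu + half) for _ in range(n)]
```

With T = 0.3 the mean due date sits at 70% of the estimated makespan, so some tardiness is likely by construction; R widens or narrows the due-date window (R = 1.2 gives looser, more dispersed due dates than R = 0.6).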
Table 3. Results of the analysis of the evolution of the solution space. For each criterion (average first, then pessimistic), the columns are: degree of improvement of solution, optimal fitness value, coefficient of variation, average running time (s), and optimal shifts for suspension.

Instance | Scale (n × m₁ × m₂ × R) | Improv. | Fitness | CoV | Time (s) | Shifts | Improv. | Fitness | CoV | Time (s) | Shifts
1 | 10 × 2 × 2 × 0.6 | 61.5% | 588.1 | 7.3% | 16.2 | 0 | 59.2% | 611 | 11.8% | 13.5 | 0
2 | 10 × 2 × 2 × 1.2 | 78.4% | 396.2 | 22.7% | 25.3 | 2 | 70.2% | 542.1 | 14.9% | 24.7 | 0
3 | 10 × 3 × 3 × 0.6 | 59.7% | 824.4 | 16.6% | 24.2 | 0 | 65.9% | 1042.8 | 11.2% | 20.2 | 0
4 | 10 × 3 × 3 × 1.2 | 75.3% | 515.2 | 27.4% | 39.8 | 2 | 74.1% | 581.8 | 13.7% | 43.2 | 2
5 | 10 × 4 × 4 × 0.6 | 55.9% | 679.3 | 23.9% | 32.3 | 1 | 81.2% | 787.9 | 28.4% | 36.5 | 1
6 | 10 × 4 × 4 × 1.2 | 87.7% | 306.3 | 26.5% | 41.9 | 2 | 64.7% | 570 | 9.4% | 43.8 | 3
7 | 15 × 2 × 2 × 0.6 | 57.7% | 1087.4 | 9.1% | 51.5 | 1 | 65.7% | 1196.1 | 5.7% | 54.1 | 1
8 | 15 × 2 × 2 × 1.2 | 80.7% | 467.1 | 19.1% | 102.2 | 1 | 78.3% | 552 | 13.9% | 72 | 0
9 | 15 × 3 × 3 × 0.6 | 59.8% | 1125.6 | 13.6% | 92.9 | 0 | 61.2% | 1257.1 | 8.9% | 106.9 | 1
10 | 15 × 3 × 3 × 1.2 | 79.2% | 619.7 | 12.7% | 134.1 | 4 | 82.2% | 621.9 | 13.1% | 102.6 | 2
11 | 15 × 4 × 4 × 0.6 | 75.5% | 1026.2 | 19.2% | 124.6 | 1 | 77.9% | 1087.7 | 21.2% | 105.6 | 0
12 | 15 × 4 × 4 × 1.2 | 81.8% | 657.4 | 12.2% | 164.3 | 0 | 77.2% | 821.4 | 11.4% | 108.5 | 0
13 | 20 × 2 × 2 × 0.6 | 61.3% | 1409 | 6.8% | 139.4 | 1 | 56.2% | 1518.6 | 4.5% | 143.9 | 0
14 | 20 × 2 × 2 × 1.2 | 87.3% | 630 | 14.4% | 251.3 | 1 | 80.5% | 797.7 | 14.1% | 214.8 | 3
15 | 20 × 3 × 3 × 0.6 | 69.1% | 1417.1 | 7.4% | 244.5 | 1 | 56.6% | 1620.6 | 6.5% | 216.9 | 1
16 | 20 × 3 × 3 × 1.2 | 69.9% | 1123.1 | 11.7% | 266.3 | 2 | 74.8% | 1364.4 | 11.6% | 286.2 | 0
17 | 20 × 4 × 4 × 0.6 | 58.3% | 1763 | 11.7% | 264.3 | 0 | 71.3% | 1847.6 | 8.7% | 328.7 | 1
18 | 20 × 4 × 4 × 1.2 | 77.2% | 896.9 | 11.6% | 483.7 | 2 | 78.0% | 967.9 | 16.9% | 362.5 | 2
19 | 25 × 5 × 5 × 0.6 | 66.1% | 2211.6 | 7.0% | 693.1 | 1 | 68.6% | 2335.4 | 4.5% | 773.5 | 0
20 | 25 × 5 × 5 × 1.2 | 78.2% | 1270.5 | 11.2% | 936.5 | 0 | 78.1% | 1498.5 | 13.5% | 937.6 | 2
21 | 25 × 6 × 6 × 0.6 | 72.8% | 1544.6 | 10.7% | 821.3 | 0 | 71.9% | 1757.8 | 6.1% | 930.8 | 0
22 | 25 × 6 × 6 × 1.2 | 75.3% | 1437.6 | 11.0% | 1078 | 0 | 79.1% | 1543.4 | 11.6% | 1145.7 | 2
23 | 30 × 6 × 6 × 0.6 | 74.9% | 2773.5 | 9.4% | 1580.7 | 1 | 71.9% | 3071.8 | 7.1% | 1488.4 | 0
24 | 30 × 6 × 6 × 1.2 | 76.6% | 1183.3 | 7.0% | 2374.3 | 2 | 82.9% | 1241.6 | 10.9% | 2113.5 | 1
25 | 30 × 7 × 7 × 0.6 | 67.7% | 2660.2 | 6.8% | 2142.7 | 0 | 73.0% | 2669.1 | 9.6% | 1826.2 | 0
26 | 30 × 7 × 7 × 1.2 | 78.8% | 1591 | 8.4% | 2760.7 | 1 | 78.6% | 1904.6 | 6.9% | 2582.4 | 1
27 | 35 × 7 × 7 × 0.6 | 68.8% | 2696 | 5.1% | 3074.1 | 0 | 64.7% | 2974.7 | 5.5% | 3475.5 | 0
28 | 35 × 7 × 7 × 1.2 | 80.2% | 1515.8 | 9.2% | 6082.5 | 1 | 81.9% | 1817.1 | 7.2% | 6274.4 | 2
Average | | 72.0% | 1229.1 | 12.8% | 858.7 | 1.0 | 72.4% | 1378.7 | 11.0% | 851.2 | 0.9
Table 4. The BRKGA and Gurobi solve for the optimal value and average run time of the data instances.

Scale (n × m₁ × m₂ × R) | Gurobi Optimal | BRKGA Optimal | Deviation | Gurobi Time (s) | BRKGA Time (s)
4 × 2 × 2 × 0.6 | 960 | 979 | 1.98% | 0.02 | 0.18
4 × 2 × 2 × 1.2 | 719 | 784 | 9.04% | 0.03 | 0.18
6 × 2 × 2 × 0.6 | 441 | 467 | 5.90% | 0.36 | 0.71
6 × 2 × 2 × 1.2 | 338 | 362 | 7.10% | 0.34 | 0.70
8 × 2 × 2 × 0.6 | 894 | 965 | 7.94% | 1.29 | 2.05
8 × 2 × 2 × 1.2 | 517 | 564 | 9.09% | 2.06 | 1.73
10 × 2 × 2 × 0.6 | 917 | 998 | 8.83% | 242.80 | 3.55
10 × 2 × 2 × 1.2 | 837 | 898 | 7.29% | 283.02 | 2.96
12 × 2 × 2 × 0.6 | 498 | 521 | 4.62% | 3864.32 | 7.60
12 × 2 × 2 × 1.2 | 398 | 434 | 9.05% | 4510.70 | 7.87
Average | 651.9 | 697.2 | 7.08% | 890.49 | 2.75
Table 5. Optimization of BRKGA and RKGA for solving data instances under different evaluation criteria. For each criterion (average first, then pessimistic): RKGA optimum, BRKGA optimum, and relative deviation of BRKGA from RKGA.

Instance | Scale (n × m₁ × m₂ × R) | RKGA | BRKGA | Deviation | RKGA | BRKGA | Deviation
1 | 10 × 2 × 2 × 0.6 | 764.5 | 597.6 | −21.8% | 840.3 | 623.4 | −25.8%
2 | 10 × 2 × 2 × 1.2 | 905.6 | 509.3 | −43.8% | 1106.9 | 589.5 | −46.7%
3 | 10 × 3 × 3 × 0.6 | 1265.2 | 869.0 | −31.3% | 1647.8 | 854.2 | −48.2%
4 | 10 × 3 × 3 × 1.2 | 1041.6 | 592.4 | −43.1% | 1338.0 | 740.6 | −44.6%
5 | 10 × 4 × 4 × 0.6 | 1099.0 | 642.0 | −41.6% | 1785.4 | 811.7 | −54.5%
6 | 10 × 4 × 4 × 1.2 | 1022.9 | 445.6 | −56.4% | 1381.2 | 447.9 | −67.6%
7 | 15 × 2 × 2 × 0.6 | 2032.6 | 1145.5 | −43.6% | 1887.2 | 1259.3 | −33.3%
8 | 15 × 2 × 2 × 1.2 | 1337.8 | 564.7 | −57.8% | 1629.2 | 659.6 | −59.5%
9 | 15 × 3 × 3 × 0.6 | 2633.0 | 1143.3 | −56.6% | 2557.0 | 1315.8 | −48.5%
10 | 15 × 3 × 3 × 1.2 | 1947.5 | 594.7 | −69.5% | 1751.7 | 750.8 | −57.1%
11 | 15 × 4 × 4 × 0.6 | 2544.3 | 1194.8 | −53.0% | 2777.7 | 1258.7 | −54.7%
12 | 15 × 4 × 4 × 1.2 | 2068.5 | 692.9 | −66.5% | 1513.9 | 775.3 | −48.8%
13 | 20 × 2 × 2 × 0.6 | 2551.5 | 1483.0 | −41.9% | 2988.4 | 1582.8 | −47.0%
14 | 20 × 2 × 2 × 1.2 | 2212.1 | 701.3 | −68.3% | 2502.0 | 752.0 | −69.9%
Average | | 1673.3 | 798.3 | −49.7% | 1836.2 | 887.3 | −50.4%
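The deviation columns report the relative gap of the BRKGA's best objective from the RKGA's, negative values meaning the BRKGA found a smaller (better) objective. A one-line sketch of the computation:

```python
def relative_deviation(reference, value):
    """Relative gap of `value` from `reference`; negative means `value`
    is smaller, as in the deviation columns comparing BRKGA to RKGA."""
    return (value - reference) / reference

# Instance 1, average criterion: RKGA 764.5 vs. BRKGA 597.6.
gap = relative_deviation(764.5, 597.6)  # ≈ -0.218, i.e. -21.8%
```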
Table 6. Average run times (s) of BRKGA and RKGA solving data instances under different evaluation criteria (average criterion first, then pessimistic).

Instance | Scale (n × m₁ × m₂ × R) | RKGA | BRKGA | RKGA | BRKGA
1 | 10 × 2 × 2 × 0.6 | 7.6 | 11.6 | 6.6 | 11.9
2 | 10 × 2 × 2 × 1.2 | 10 | 16.2 | 9.1 | 21.8
3 | 10 × 3 × 3 × 0.6 | 10.8 | 20.8 | 12.8 | 20.3
4 | 10 × 3 × 3 × 1.2 | 13.4 | 28.9 | 15.3 | 26.3
5 | 10 × 4 × 4 × 0.6 | 13.4 | 22.7 | 11.8 | 35.8
6 | 10 × 4 × 4 × 1.2 | 16.3 | 37.1 | 18.9 | 38.3
7 | 15 × 2 × 2 × 0.6 | 20.1 | 64.4 | 24.8 | 47.7
8 | 15 × 2 × 2 × 1.2 | 32.1 | 71.3 | 30.6 | 121.1
9 | 15 × 3 × 3 × 0.6 | 31.4 | 82.5 | 30.7 | 88.9
10 | 15 × 3 × 3 × 1.2 | 43.3 | 101.1 | 41.1 | 101.9
11 | 15 × 4 × 4 × 0.6 | 54.2 | 151.3 | 40.3 | 112.9
12 | 15 × 4 × 4 × 1.2 | 47.2 | 210.4 | 49.9 | 115
13 | 20 × 2 × 2 × 0.6 | 63 | 173.9 | 47.5 | 150.2
14 | 20 × 2 × 2 × 1.2 | 75.7 | 196.9 | 63.7 | 211.3
Average | | 31.3 | 84.9 | 28.8 | 78.8
Table 7. Calculation of EVPI and VSS.

Instance | RA | WS | EEV | EVPI | EVPI% | VSS | VSS%
1 | 588.1 | 559.3 | 631.6 | 28.8 | 5.1% | 43.5 | 7.4%
2 | 396.2 | 386.9 | 598.9 | 9.3 | 2.4% | 202.7 | 51.2%
3 | 824.4 | 778.2 | 1152.3 | 46.2 | 5.9% | 327.9 | 39.8%
4 | 515.2 | 470.2 | 616.1 | 45.0 | 9.6% | 100.9 | 19.6%
5 | 679.3 | 595.9 | 965.9 | 83.4 | 14.0% | 286.6 | 42.2%
6 | 306.3 | 272.6 | 324.4 | 33.7 | 12.4% | 18.1 | 5.9%
7 | 1087.4 | 1012.2 | 1120.4 | 75.2 | 7.4% | 33.0 | 3.0%
8 | 467.1 | 424.2 | 825.8 | 42.9 | 10.1% | 358.7 | 76.8%
9 | 1125.6 | 1036.1 | 1486.0 | 89.5 | 8.6% | 360.4 | 32.0%
10 | 619.7 | 553.1 | 755.9 | 66.6 | 12.0% | 136.2 | 22.0%
11 | 1026.2 | 885.8 | 1190.9 | 140.4 | 15.9% | 164.7 | 16.0%
12 | 657.4 | 582.5 | 1117.1 | 74.9 | 12.9% | 459.7 | 69.9%
13 | 1409 | 1366.6 | 1472.7 | 42.4 | 3.1% | 63.7 | 4.5%
14 | 630 | 523.7 | 772.2 | 106.3 | 20.3% | 142.2 | 22.6%
15 | 1417.1 | 1336.2 | 1550.1 | 80.9 | 6.1% | 133.0 | 9.4%
16 | 1123.1 | 990.5 | 1296.0 | 132.6 | 13.4% | 172.9 | 15.4%
17 | 1763 | 1507.1 | 1791.7 | 255.9 | 17.0% | 28.7 | 1.6%
18 | 896.9 | 789.1 | 904.8 | 107.8 | 13.7% | 7.9 | 0.9%
19 | 2211.6 | 2111.1 | 2378.5 | 100.5 | 4.8% | 166.9 | 7.5%
20 | 1270.5 | 1222.2 | 1380.7 | 48.3 | 4.0% | 110.2 | 8.7%
21 | 1544.6 | 1487.0 | 1877.4 | 57.6 | 3.9% | 332.8 | 21.5%
22 | 1437.6 | 1293.3 | 1694.0 | 144.3 | 11.2% | 256.4 | 17.8%
23 | 2773.5 | 2568.6 | 3353.7 | 204.9 | 8.0% | 580.2 | 20.9%
24 | 1183.3 | 1133.2 | 1370.6 | 50.1 | 4.4% | 187.3 | 15.8%
25 | 2660.2 | 2478.5 | 2816.5 | 181.7 | 7.3% | 156.3 | 5.9%
26 | 1591 | 1494.1 | 1940.6 | 96.9 | 6.5% | 349.6 | 22.0%
27 | 2696 | 2605.9 | 3066.2 | 90.1 | 3.5% | 370.2 | 13.7%
28 | 1515.8 | 1462.3 | 2082.7 | 53.5 | 3.7% | 566.9 | 37.4%
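The EVPI and VSS columns follow the standard stochastic-programming definitions for a minimization problem; a small sketch reproducing Instance 1's values from Table 7:

```python
def evpi_vss(ra, ws, eev):
    """Stochastic-programming value metrics for a minimization problem:
    RA  - objective of the stochastic (here-and-now) solution,
    WS  - wait-and-see objective (uncertainty resolved before deciding),
    EEV - expected cost of using the mean-scenario solution.
    EVPI = RA - WS; VSS = EEV - RA; percentages taken relative to WS and RA."""
    evpi = ra - ws
    vss = eev - ra
    return evpi, 100 * evpi / ws, vss, 100 * vss / ra

# Instance 1 of Table 7: RA = 588.1, WS = 559.3, EEV = 631.6.
evpi, evpi_pct, vss, vss_pct = evpi_vss(588.1, 559.3, 631.6)
# evpi ≈ 28.8 (≈5.1%), vss ≈ 43.5 (≈7.4%)
```

A positive EVPI bounds what it would be worth to eliminate the processing-time uncertainty; a positive VSS quantifies the gain of solving the stochastic model instead of simply fixing the mean scenario.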
Table 8. Calculation of scenario population diversity metrics. Each row lists, for the primary (first-generation) scenario population and then the last scenario population, the Max, Min, and σ of the worst-fitness-value feature followed by the Max, Min, and σ of the best-fitness-value feature.

Instance | Scale (n × m₁ × m₂ × R) | Primary: worst-fitness feature (Max, Min, σ), best-fitness feature (Max, Min, σ) | Last: worst-fitness feature (Max, Min, σ), best-fitness feature (Max, Min, σ)
1 10 × 2 × 2 × 0.6 152910281158356406915531039207908574143
2 10 × 2 × 2 × 1.2 2725235610578039310627462374114797391133
3 10 × 3 × 3 × 0.6 265718422451126743113265217702371205631157
4 10 × 3 × 3 × 1.2 256418102507754667426361827193860417120
5 10 × 4 × 4 × 0.6 3297286212992066179327727411721015573144
6 10 × 4 × 4 × 1.2 21301836725962681382307199287699222164
7 15 × 2 × 2 × 0.6 216118149112531058100232418551511484974166
8 15 × 2 × 2 × 1.2 384336017655444610942383634192885421143
9 15 × 3 × 3 × 0.6 321627021421458995108363925023431741946239
10 15 × 3 × 3 × 1.2 24192138747556211122424205210882757772
11 15 × 4 × 4 × 0.6 25202157901211873205268421441751447775207
12 15 × 4 × 4 × 1.2 14551073116830452146178610432231406377338
13 20 × 2 × 2 × 0.6 4624406916415521421905037401928417571334134
14 20 × 2 × 2 × 1.2 131510267879650510015609711741105475206
15 20 × 3 × 3 × 0.6 3260278914816141340693552270224918051223135
16 20 × 3 × 3 × 1.2 4320362318514421025106430635561921801843241
17 20 × 4 × 4 × 0.6 68075883216201816301136976575231423761377238
18 20 × 4 × 4 × 1.2 1992167581100674074250516862401462693206
19 25 × 5 × 5 × 0.6 4598394114724362129795023382131928531909226
20 25 × 5 × 5 × 1.2 3975328116916751072138411831572792194854393
21 25 × 6 × 6 × 0.6 34302737149192814421003581247528921191241234
22 25 × 6 × 6 × 1.2 78636426323173512041098671628974820861048288
23 30 × 6 × 6 × 0.6 70035792286312626571087651535761434792295329
24 30 × 6 × 6 × 1.2 442035921741535995112481435413121892938249
25 30 × 7 × 7 × 0.6 70305722342327224702057852477479837591945479
26 30 × 7 × 7 × 1.2 58054666312193113531465950445734524171202303
27 35 × 7 × 7 × 0.6 4734409211728582502904935403128331872468222
28 35 × 7 × 7 × 1.2 57845128168174012771005935509624221111071275
Table 9. Calculation of the EDR and SDR on the two feature axes (worst-fitness-value feature first, then best-fitness-value feature).

Instance | Scale (n × m₁ × m₂ × R) | EDR | SDR | EDR | SDR
1 | 10 × 2 × 2 × 0.6 | 1.03 | 1.80 | 1.71 | 2.44
2 | 10 × 2 × 2 × 1.2 | 1.01 | 1.09 | 1.05 | 1.19
3 | 10 × 3 × 3 × 0.6 | 1.08 | 0.97 | 1.50 | 1.31
4 | 10 × 3 × 3 × 1.2 | 1.07 | 0.77 | 1.43 | 1.33
5 | 10 × 4 × 4 × 0.6 | 1.23 | 1.33 | 1.71 | 1.74
6 | 10 × 4 × 4 × 1.2 | 1.07 | 1.21 | 1.46 | 2.02
7 | 15 × 2 × 2 × 0.6 | 1.35 | 1.66 | 2.61 | 3.05
8 | 15 × 2 × 2 × 1.2 | 2.49 | 2.52 | 4.31 | 5.62
9 | 15 × 3 × 3 × 0.6 | 2.21 | 2.41 | 1.72 | 1.81
10 | 15 × 3 × 3 × 1.2 | 1.32 | 1.47 | 1.87 | 1.76
11 | 15 × 4 × 4 × 0.6 | 1.49 | 1.94 | 1.98 | 2.40
12 | 15 × 4 × 4 × 1.2 | 1.94 | 1.93 | 2.72 | 3.60
13 | 20 × 2 × 2 × 0.6 | 1.83 | 1.73 | 3.23 | 3.69
14 | 20 × 2 × 2 × 1.2 | 2.04 | 2.22 | 2.17 | 2.64
15 | 20 × 3 × 3 × 0.6 | 1.81 | 1.68 | 2.13 | 1.97
16 | 20 × 3 × 3 × 1.2 | 1.08 | 1.04 | 2.29 | 2.27
17 | 20 × 4 × 4 × 0.6 | 1.33 | 1.46 | 2.57 | 2.12
18 | 20 × 4 × 4 × 1.2 | 2.58 | 2.96 | 2.89 | 2.80
19 | 25 × 5 × 5 × 0.6 | 1.83 | 2.17 | 3.08 | 2.86
20 | 25 × 5 × 5 × 1.2 | 1.38 | 1.65 | 2.22 | 2.85
21 | 25 × 6 × 6 × 0.6 | 1.60 | 1.94 | 1.81 | 2.34
22 | 25 × 6 × 6 × 1.2 | 1.66 | 2.32 | 1.95 | 2.65
23 | 30 × 6 × 6 × 0.6 | 1.89 | 2.15 | 2.53 | 3.03
24 | 30 × 6 × 6 × 1.2 | 1.54 | 1.79 | 1.77 | 2.22
25 | 30 × 7 × 7 × 0.6 | 2.35 | 2.34 | 2.26 | 2.34
26 | 30 × 7 × 7 × 1.2 | 1.31 | 1.11 | 2.10 | 2.07
27 | 35 × 7 × 7 × 0.6 | 1.41 | 2.42 | 2.02 | 2.47
28 | 35 × 7 × 7 × 1.2 | 1.28 | 1.44 | 2.25 | 2.74
Average | | 1.58 | 1.76 | 2.19 | 2.47
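Assuming the EDR compares the extremum spreads (max − min) and the SDR the standard deviations of a feature axis between the first and last scenario populations, consistent with the abbreviations "extremum deviation ratio" and "standard deviation ratio", the two metrics can be sketched as:

```python
import statistics

def edr_sdr(first_gen, last_gen):
    """Diversity ratios of one feature axis (e.g. fitness under the
    worst-case solution) between the first and last scenario populations.
    Assumed definitions: EDR = ratio of extremum spreads (max - min),
    SDR = ratio of population standard deviations; values above 1 mean
    the co-evolved scenario population became more spread out."""
    edr = (max(last_gen) - min(last_gen)) / (max(first_gen) - min(first_gen))
    sdr = statistics.pstdev(last_gen) / statistics.pstdev(first_gen)
    return edr, sdr

# Synthetic illustration: the last generation covers a wider range.
edr, sdr = edr_sdr([10, 12, 14, 16], [5, 10, 15, 25])
```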