Article

A Flow Shop Scheduling Method Based on Dual BP Neural Networks with Multi-Layer Topology Feature Parameters

1 Jinan Vocational College, Jinan 250002, China
2 Key Laboratory of Road Construction Technology and Equipment of MOE, Chang'an University, Xi'an 710064, China
3 Xi'an Electronic Engineering Research Institute, Xi'an 710100, China
* Author to whom correspondence should be addressed.
Systems 2024, 12(9), 339; https://doi.org/10.3390/systems12090339
Submission received: 19 July 2024 / Revised: 24 August 2024 / Accepted: 28 August 2024 / Published: 1 September 2024

Abstract

Nowadays, flow shops increasingly have to accommodate customized demand in the context of service-oriented manufacturing. Since production tasks are often characterized by multiple varieties, low volumes, and short lead times, supporting logistics becomes an indispensable factor in practical scheduling decisions, reflecting the frequent transport of jobs between resources. Motivated by this background, a hybrid method based on dual back propagation (BP) neural networks is proposed to meet real-time scheduling requirements while integrating production and transport activities. First, according to the different resource attributes, the hierarchical structure of a flow shop is divided into three layers: the operation task layer, the job logistics layer, and the production resource layer. Based on the process logic relationships between intra-layer and inter-layer elements, an operation task–logistics–resource supernetwork model is established. Secondly, a dual BP neural network scheduling algorithm is designed to determine an operation sequence that accounts for transport time. Neural network 1 performs the initial classification of operation task priorities, and neural network 2 sorts conflicting tasks within the same priority, which effectively reduces the computational effort and accelerates the solution. Finally, the effectiveness of the proposed method is verified by comparing the completion time and computational time for different examples. The numerical simulation results show that as the problem scale increases, the solution ability of traditional methods gradually deteriorates, whereas the dual BP neural network maintains stable performance and a fast computational time.

1. Introduction

The development of artificial intelligence has promoted the digitalization, networking, and intelligent transformation of the manufacturing industry towards Industry 4.0 [1]. Against this background, the production and organization mode of the flow shop has changed greatly [2,3]. On the one hand, the Industrial Internet of Things (IIoT) and edge computing technologies enable the real-time collection and rapid processing of shop-floor data to meet the requirements of high-quality and efficient production. On the other hand, in the face of fierce market competition, service-oriented manufacturing modes such as mass customization make the characteristics of multi-variety, low-volume, and short-lead-time production particularly prominent. Consequently, the large number of operation tasks increases the size of the scheduling problem to be computed. In a typical flow shop, raw materials are transported by an AGV (Automated Guided Vehicle) from a warehouse to the corresponding machines for machining, and then transported again by an AGV to subsequent machines to complete the remaining operations. The delayed arrival of jobs can cause ineffective waiting at subsequent operations, such as at downstream machines, and ultimately extends production cycles. Meanwhile, the frequent transport of jobs between machines makes supporting logistics an indispensable factor in practical scheduling decisions. Effective collaboration between multi-stage production and supporting logistics prevents such problems and thereby enables efficient scheduling of operations on the flow shop floor [4,5].
With constraints such as the frequent transport of jobs, the insertion of urgent orders, and process sequences, it becomes necessary, but also very challenging, to quickly solve multi-stage production and supporting logistics scheduling problems in real time in a digital-twin flow shop. In recent years, scholars have proposed complex network theory to address issues such as collaborative modeling problems [6], because it can describe the two-dimensional topological structure of complex manufacturing systems through parameters such as the node degree, clustering coefficient, and redundancy, and can thereby provide heuristic information for scheduling rules. However, the above complex network-based models are built around jobs or operations and lack a description of the heterogeneous features and correlations among the various elements (machine tools, AGVs, jobs). In this case, a supernetwork model consisting of multiple layers of complex networks becomes attractive for representing the multi-layer and multi-attribute characteristics of production logistics in a flow shop [7]. In addition, numerous experiments and operational practices have demonstrated that optimal scheduling results show similarities in machine selection and operation sequencing when similar scheduling problems are solved repeatedly, while traditional flow shop scheduling algorithms often ignore historical scheduling data and decisions [8,9]. With the deployment of IIoT and edge computing, large amounts of real-time and historical data can be collected on the flow shop floor to provide inputs for machine learning, enabling efficient real-time scheduling decisions for subsequent jobs.
Motivated by the above background, this paper adopts supernetwork theory to describe the production and logistics of digital-twin flow shops. Machines, jobs, and buffer sites are mapped as nodes, and the inter-layer/intra-layer process relationships between nodes, such as process sequence relationships and machine conflict relationships, are mapped as superedges. Meanwhile, a dual back propagation (BP) neural network is used to establish a scheduling model aimed at improving solution efficiency. The main contributions of this paper are as follows:
(1)
An operation task–logistics–resource supernetwork model was introduced to describe the hierarchical and heterogeneous relationships between the production and logistics nodes in a flow shop. The topological feature parameters of this model are extracted and used as inputs for the scheduling algorithm.
(2)
A dual BP neural network scheduler was designed to enable the flow shop to make rapid decisions based on historical optimal scheduling schemes. Specifically, neural network 1 is used to prioritize operation tasks and generate a priority queue, while neural network 2 is used to resolve conflicts within the priority queue to form the final scheduling decision.
The rest of the paper is organized as follows. Section 2 reviews the related literature. Section 3 describes the problem of hybrid scheduling on the flow shop. Section 4 deduces the supernetwork modeling for flow shop production logistics. Section 5 establishes a neural network scheduler. Section 6 presents the flow shop scheduling case and its solutions. The conclusions are given in Section 7.

2. Literature Review

The flow shop scheduling problem is a popular research topic in both academia and industry. Current solution methods mainly include mathematical models, scheduling rules, meta-heuristics, and machine learning. Scheduling methods based on classic mathematical models include branch-and-bound [10] and Lagrangian relaxation [11], among others; they can derive the optimal solution, but often require a substantial amount of computational time. Rule-based scheduling, e.g., SPT (Shortest Processing Time) [12], EDF (Earliest Deadline First) [13], and LPT (Longest Processing Time) [14], is simple and fast, but it has poor generalization ability and tends to be applicable only to specific scheduling scenarios [15]. Therefore, meta-heuristic scheduling methods, such as genetic algorithms [16], particle swarm algorithms [17], ant colony algorithms [18], and the frog-leaping algorithm [19], have attracted extensive attention over the past decades for their ability to find near-optimal solutions through iterative searches within the feasible solution domain. While these algorithms are effective at obtaining high-quality solutions, they often require considerable time to solve large-scale scheduling problems.
In recent years, machine learning has demonstrated significant advantages in addressing large-scale scheduling problems [20]. On the one hand, because neural networks can effectively learn from historical data, the computational time required to solve a problem can be significantly reduced, enabling rapid solutions. On the other hand, trained neural network models can be reused for other scheduling problems of a similar type and scale. For example, ref. [21] adopted an improved Deep Q-Network (DQN) to implement a scheduling scheme for a robot-driven sanding processing line, aiming to minimize the total delay of multiple parallel service streams. Ref. [22] developed a neural network-based job shop scheduling model that evaluated different possible scheduling schemes based on internal and external constraints. Ref. [23] established a dynamic flexible job shop scheduling problem (DFJSP) model with the goals of makespan and robustness, and used a two-stage algorithm based on convolutional neural networks to solve it. Ref. [24] proposed a multi-layer neural network model for solving the flow shop scheduling problem, producing the optimal sequence of jobs with the objective of minimizing the makespan. Ref. [25] designed state features, actions, and reward functions in a reinforcement learning algorithm to solve the flow shop scheduling problem, minimizing the completion time. Ref. [26] summarized the architectures and workflows for training deep reinforcement learning scheduling models and applying the resulting scheduling solvers. It is widely acknowledged that machine learning can achieve significant improvements in efficiency; however, challenges remain in selecting appropriate input parameters. For instance, operation tasks typically include only basic input parameters, such as the processing time and remaining time, while neglecting features like the correlation between multi-stage production and supporting logistics. Consequently, the potential benefits of applying machine learning to complex scheduling problems remain underexplored. This paper, therefore, employs supernetwork theory with a multi-layer structure to model multi-stage production and supporting logistics in a flow shop. The topological parameters of this model are then incorporated into the input parameters of a neural network to inform practical scheduling decisions. Thus, this study introduces a novel approach to applying machine learning to scheduling problems.

3. Problem Description and Mathematical Model

In this paper, the flow shop scheduling problem integrated with transport times is investigated. Specifically, there are n jobs to be processed on m machines, each job consists of a sequence of operations, and each operation is performed on a specified machine. The scheduling decision is to determine the sequence of operations and the start time on each machine so as to obtain the optimal scheduling performance. The flow shop contains various jobs, different machines, buffer sites, and AGVs (Automated Guided Vehicles). According to the MES (Manufacturing Execution System), the operation sequence of each job can be extracted, and the machines corresponding to the operations are defined. The following assumptions are made:
(1)
All jobs and machines are available at time 0;
(2)
Each machine can only process one operation at a time;
(3)
Each job can only be processed on one machine at the same time;
(4)
Resource interruption can be ignored;
(5)
There are sequence constraints between the different operation tasks of a job, and a job needs to go through predefined operation tasks, which can be realized by dedicated machines;
(6)
An AGV is sufficient to complete the transport task;
(7)
Transport time dominates; therefore, the loading and unloading time can be ignored.
Table 1 shows the notations to be used for the model.
Considering minimizing the maximum completion time (makespan), the mathematical model is expressed as follows:
$\min \; FT_{\max} = \max_{i} FT_{i}, \quad i = 1, 2, \ldots, n$  (1)
$\text{s.t.} \quad \sum_{k=1}^{m} Z_{ijk} = 1, \quad i, j = 1, 2, \ldots, n$  (2)
$\sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{k=1}^{m} B_{ijghk} \le 1$  (3)
$\sum_{g=1}^{n} \sum_{h=1}^{n_g} \sum_{k=1}^{m} B_{ijghk} \le 1, \quad i, j = 1, 2, \ldots, n$  (4)
$ET_{ijk} = ST_{ijk} + PT_{ijk} \times Z_{ijk}, \quad i, j = 1, 2, \ldots, n$  (5)
$ET_{ijk} \ge ET_{ghk} + PT_{ijk} - L \times (1 - B_{ijghk}), \quad i, j = 1, 2, \ldots, n$  (6)
$ST_{ijk} = \begin{cases} ET_{i(j-1)m} + TT_{km}, & ET_{i'j'k} < ET_{i(j-1)m} + TT_{km} \\ ET_{i'j'k}, & ET_{i'j'k} \ge ET_{i(j-1)m} + TT_{km} \end{cases}$  (7)
where Equation (1) is the optimization objective; Equation (2) ensures that each operation is processed on exactly one machine; Equations (3) and (4) indicate that each operation has at most one succeeding and at most one preceding operation on a machine; Equation (5) states that the completion time of an operation equals its start time plus its processing time; Equation (6) ensures that a machine cannot process multiple jobs at the same time; and Equation (7) states that if the machine of the current operation becomes available before the transport of the previous operation is finished, the start time is constrained by the transport time; otherwise, the operation is constrained by the availability of the current machine.
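To make the timing logic concrete, the following minimal sketch implements the start-time rule of Equation (7) and the completion-time update of Equation (5); the function names and example values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the timing rules: Equation (7) chooses the later of the job's
# arrival after transport and the machine's release time; Equation (5) adds the
# processing time. Function names and the example values are illustrative.

def start_time(prev_op_end, transport_time, machine_ready):
    """Return ST_ijk following Equation (7)."""
    arrival = prev_op_end + transport_time          # ET_i(j-1)m + TT_km
    return arrival if machine_ready < arrival else machine_ready

def completion_time(start, processing_time):
    """Return ET_ijk following Equation (5)."""
    return start + processing_time

# Example: previous operation ends at t = 10, transport takes 5, machine free at t = 12.
st = start_time(prev_op_end=10, transport_time=5, machine_ready=12)
et = completion_time(st, processing_time=3)
print(st, et)   # 15 18 -> the transport constraint dominates here
```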

4. Supernetwork Modeling for Flow Shop Production Logistics

Selecting the appropriate input parameters is one critical problem in improving the performance of machine learning-based scheduling models. Currently, the commonly used parameters, such as the processing time and remaining time, often lead to incomplete scheduling scheme information. With this background, the supernetwork theory based on a multi-layer complex network is used to describe the flow shop production and logistics settings.

4.1. Analysis of Flow Shop Production Logistics Elements

This section analyzes the flow shop production and logistics elements from the network perspectives of "nodes" and "edges". From the "node" perspective, the flow shop elements include machines, buffer sites, and jobs. From the "edge" perspective, the links between nodes include processing relationships and transport relationships. Specifically, a processing relationship refers to the multi-stage operations that convert raw materials into finished products [27], and a transport relationship refers to a job's movement between two nodes (machines or buffer sites) by AGVs. Therefore, jobs are the core carriers of edges in flow shop production logistics.

4.2. Supernetwork Model Construction and Feature Extraction

4.2.1. Supernetwork Model Construction

The supernetwork based on a multi-layer complex network is a collection of multiple single-layer subnets. Therefore, it is essential to analyze the correlation relationship between different sub-networks and map them into superedges. As shown in Figure 1, we establish an operation task–logistics–resource supernetwork model, and each sub-network is abstracted into a weighted directed complex network G = {V, E, w}.
The supernetwork model includes three parts: operation tasks, job logistics, and production resources. Each part includes nodes, edges between nodes on the same layer, and edges between nodes on different layers, and together these elements describe the multi-layer networked relationships of flow shop production logistics.
The specific description of each sub-network model is shown in Table 2. GR reflects the logical layout of machines and their buffer sites in the flow shop; GT reflects the material handling required by the operation tasks; and GP reflects the relationships between machining tasks and transport tasks.
Based on the above three sub-network models, the operation task–logistics–resource supernetwork model can be represented by the triple
$G = \{V, E, w\}, \quad G = G_R \cup G_T \cup G_P, \quad E = E_R \cup E_T \cup E_P \cup E_{RT} \cup E_{TP}$
where ERT and ETP, respectively, represent the inter-layer connected edges between sub-networks GR and GT and between sub-networks GT and GP, which are called superedges. The meanings are specified as follows:
(1)
The superedge set between sub-networks GP and GT is ETP, which represents the mapping of nodes in the operation task layer sub-network to the machines used by the corresponding operation tasks.
(2)
The superedge set between sub-networks GT and GR is ERT, which represents the mapping between the nodes in the job logistics layer sub-network (that is, the machines used by the operation tasks of the operation task layer nodes) and the machine nodes in the actual flow shop. Here, the machining and transport aspects of the operation tasks are separated into two network layers, which makes them easier to analyze; this is one of the major advantages of using a supernetwork to describe complex systems. A minimal construction sketch of the three-layer model is given below.
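As a minimal illustration of this layered structure, the sketch below builds the three layers as weighted directed graphs and keeps the superedges as separate lists. The node names, weights, and the use of networkx are illustrative assumptions; the paper builds and analyzes the model with UCINET.

```python
# A minimal construction sketch of the three-layer supernetwork using networkx.
import networkx as nx

G_R = nx.DiGraph()   # production resources layer: machines and buffer sites
G_T = nx.DiGraph()   # job logistics layer: transport moves implied by the operations
G_P = nx.DiGraph()   # operation tasks layer: machining tasks of each job

# Resource layer edge: weight = transport time between two resources
G_R.add_edge("Machine 1", "Machine 2", weight=2)
G_R.add_edge("Machine 1", "Buffer 1", weight=2)

# Operation task layer edge: weight = completion time of the preceding task
G_P.add_edge("A-2", "A-3", weight=1)                 # job A, process 1 -> process 2

# Logistics layer edge: weight = transport time required by the operation sequence
G_T.add_edge("Machine 3", "Machine 1", weight=5)

# Superedges E_TP (task -> machine it uses) and E_RT (logistics node -> physical machine)
E_TP = [("A-2", "Machine 3"), ("A-3", "Machine 1")]
E_RT = [("Machine 3", "Machine 3"), ("Machine 1", "Machine 1")]

supernetwork = {"layers": {"R": G_R, "T": G_T, "P": G_P},
                "superedges": {"TP": E_TP, "RT": E_RT}}
```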

4.2.2. Supernetwork Feature Knowledge Extraction

The objective of flow shop production logistics is to minimize the maximum completion time. There is a mapping relationship between the supernetwork characteristics and the scheduling objective, as explained below:
(1)
The job operation number Ni can be determined by the node sequence of the operation tasks layer network, which can be directly used as input information for subsequent neural networks.
(2)
According to the network edge’s definition of the operation tasks layer, the edge weight wp between the current operation task node and the next operation task represents the time for completing the current operation task.
(3)
The out-degree OLDP of a node in the job logistics layer network represents the logistics transport time between nodes caused by the operation tasks.
(4)
The high-order out-degree HODP of the operation tasks layer network reflects the sum of processing times for the next operation tasks in the current process, i.e., the sum of the remaining processing times.
(5)
The node degree Dp of the operation tasks layer network: The node degree of the operation task layer reflects the number of tasks associated with the current operation task and the degree of correlation between the task node and other nodes in the network. This attribute is one of the most important topology attributes in the supernetwork, and a large value of Dp indicates the high impact of the task node.
(6)
The node degree DR of the production resource layer network: the node degree of machines reflects the number of resources associated with current machines, and it reflects the importance of the machines in the production resources network.
(7)
The node clustering coefficient CR of the production resources layer network: the clustering coefficient of machine nodes affects the propagation dynamics on the network, and it reflects the importance of machines’ connection in the production resources layer network.
(8)
The comprehensive loss LR of a machine affects its remaining service life and failure probability, and thus the completion time of the operation tasks. Therefore, the comprehensive loss is added as an attribute to characterize machine degradation. It is equal to the product of the processing time undertaken by the machine and its loss per unit time. A feature-extraction sketch in code is given after this list.
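The sketch below shows one way these features could be computed from graph objects like those in the previous sketch. The helper names, the treatment of the logistics out-degree as a lookup, and the interpretations of the high-order out-degree and of the machine workload are assumptions for illustration, not the paper's exact definitions.

```python
# Illustrative feature extraction from the layer graphs with networkx.
import networkx as nx

def task_features(G_P, transport_after, node):
    """Features of one operation task node; transport_after maps task -> transport time."""
    od_p  = G_P.out_degree(node, weight="weight")    # weighted out-degree (processing time)
    d_p   = G_P.degree(node)                         # number of related task nodes
    old_p = transport_after.get(node, 0.0)           # logistics out-degree OLD_P
    # High-order out-degree: current weight plus the weights of all downstream tasks,
    # i.e., a proxy for the remaining processing time of the job.
    hod_p = od_p + sum(G_P.out_degree(v, weight="weight") for v in nx.descendants(G_P, node))
    return {"OD_P": od_p, "D_P": d_p, "OLD_P": old_p, "HOD_P": hod_p}

def machine_features(G_R, node, loss_rate):
    """Features of one production resource node."""
    d_r = G_R.degree(node)                           # node degree D_R
    c_r = nx.clustering(G_R.to_undirected(), node)   # clustering coefficient C_R
    workload = G_R.out_degree(node, weight="weight") # proxy for the time handled by the machine
    l_r = workload * loss_rate                       # comprehensive loss L_R
    return {"D_R": d_r, "C_R": c_r, "L_R": l_r}
```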

5. Establishment of Neural Network Scheduler

The topological features of the supernetwork model in Section 4.2.2 are used as input parameters of the BP neural network model. This hybrid approach has clear advantages in real-time performance, generalization, and data mining for flow shop scheduling problems. As shown in Figure 2, the hybrid scheduling method based on a dual BP neural network includes the following steps. First, the above problem, including the transport time, is solved by a genetic algorithm to obtain optimal scheduling sequences and generate the training data set for the neural networks. Secondly, the network attributes related to scheduling are selected as input parameters of the neural networks by combining them with supernetwork theory. Finally, an integrated neural network model is established and the two neural networks are trained on the data set. The trained neural networks are then used to classify the operation priorities of each job in a new scheduling problem to obtain the final solution.

5.1. Selection and Transformation of Neural Network Input Parameters

Neural network 1 is used to preliminarily classify all operation tasks. Considering the characteristics of the scheduling problem, some traditional attributes of the operation tasks [28] and attributes extracted from the operation task–logistics–resource supernetwork are selected as its input parameters, as follows: the node serial number in the operation tasks layer network, the node out-degree in the operation tasks layer network, the node out-degree in the job logistics layer network, the high-order out-degree of nodes in the operation tasks layer network, the node degree in the operation tasks layer network, the node degree in the production resources layer network, the clustering coefficient of production resources layer network nodes, and the comprehensive loss in the production resources layer.
Neural network 2 is used to sequence conflicting operation tasks. Such conflicts arise from operation constraints and machine constraints: on the one hand, there are sequence constraints between the different operation tasks of a job; on the other hand, several tasks may share and compete for one machine, so the exact machining sequence must be determined. If several conflicting tasks exist in the flow shop schedule, the conflicting tasks are assigned to groups, each containing at most two operation tasks. The operation sequence of two conflicting tasks is then refined on the basis of the priority division produced by neural network 1. Therefore, the input attributes of neural network 2 are obtained by comparing the conflicting tasks attribute by attribute, using each attribute of neural network 1 as a comparison item; a sketch of this feature assembly is given below.
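For illustration, the snippet below assembles a raw feature vector for neural network 1 and a pairwise comparison vector for neural network 2. The sign-based encoding is one simple possibility and does not reproduce the paper's exact binary coding (which yields 22 and 24 input bits, respectively).

```python
# Illustrative assembly of the neural network input vectors.
import numpy as np

NN1_ATTRS = ["N_i", "OD_P", "OLD_P", "HOD_P", "D_P", "D_R", "C_R", "L_R"]

def nn1_input(task):
    """task: dict holding the eight attributes of one operation task."""
    return np.array([task[a] for a in NN1_ATTRS], dtype=float)

def nn2_input(task_a, task_b):
    """Compare two conflicting tasks attribute by attribute:
    +1 if task_a's value is larger, -1 if smaller, 0 if equal."""
    return np.sign(nn1_input(task_a) - nn1_input(task_b))
```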

5.2. Selection and Transformation of Neural Network Output Parameters

As the size of the scheduling problem increases, sequencing the conflicting operations can greatly increase the computational time. Therefore, this section uses a dual BP neural network to schedule all tasks in the operation sequence. Neural network 1 is trained to assign initial priorities to the tasks, and neural network 2 is trained to rank conflicting tasks with the same priority. This approach effectively reduces the computational time and accelerates the solution process. In neural network 1, the priority is divided into six levels, while in neural network 2, the priority is divided into two levels.

5.3. Structure of Neural Network Scheduler

BP neural networks 1 and 2 contain an input layer, output layer, and hidden layer. It is necessary to determine the number of nodes in each layer to establish the neural network model.
(1) Input layer. According to Section 5.1, the input layer of BP neural network 1 consists of eight basic parameters and the input layer of BP neural network 2 consists of eight basic parameters. The number of nodes in the neural network input layer is determined according to the number of codes corresponding to each basic parameter.
(2) Output layer. The output targets of BP neural network 1 are the six priority classes, so there are six output nodes. The output target of BP neural network 2 is the order of the two conflicting operations; the result can only be "process first" or "process second", so there are two output nodes.
(3) Hidden layer. The number of hidden nodes has an important influence on the classification accuracy and computational complexity of the neural network. The initial number of hidden layer nodes can be determined by an empirical formula [29], for example, $n_1 = \sqrt{n + m} + a$, where n1 is the number of hidden nodes, n is the number of inputs, m is the number of outputs, and a is an integer between 1 and 10. In this paper, the number of hidden layer nodes of BP neural network 1 is preliminarily set to 6–16, and that of BP neural network 2 to 4–16. Ultimately, the number of hidden layer nodes is tuned experimentally to find the best configuration.
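A small sketch of this sizing rule follows, assuming the common form of the empirical formula with a ranging from 1 to 10; it yields candidate ranges comparable to those quoted above, which are then tuned experimentally.

```python
# Candidate hidden-layer sizes from the empirical rule n1 = sqrt(n + m) + a.
import math

def hidden_node_candidates(n_inputs, n_outputs):
    base = math.sqrt(n_inputs + n_outputs)
    return [round(base) + a for a in range(1, 11)]

print(hidden_node_candidates(8, 6))   # candidates for neural network 1 (8 inputs, 6 outputs)
print(hidden_node_candidates(8, 2))   # candidates for neural network 2 (8 inputs, 2 outputs)
```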

6. Numerical Examples

The algorithms are implemented in Matlab R2018b and run on a PC with an Intel G5400 CPU (3.70 GHz) and 4 GB of memory. UCINET 6 software is used to build and analyze the operation task–logistics–resource supernetwork model.

6.1. Construct the Training Data Set

The training process of a neural network discovers the regularities in its data set. Because optimal scheduling results for similar problems tend to be similar, a neural network model can be trained to mine the data set of optimal scheduling schemes. However, a scheduling scheme cannot be used directly as training data. Instead, the corresponding input parameters are used to describe the scheduling information reflected in the scheme, so that the scheme is transformed into a training data set and the corresponding binary coding forms the input set of the training samples. At the same time, the position of each task in the sequence is transformed into a priority and used as the target output, yielding an output set of training samples composed of 6-bit binary codes; a small encoding sketch follows.
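The sketch below illustrates one way such a conversion could look: each task's position in the GA-optimal sequence is mapped to one of six priority classes and paired with its feature vector. The helper names are hypothetical, and the binary coding described above would be applied on top of these labels.

```python
# Illustrative conversion of a GA-optimal operation sequence into training samples.
import numpy as np

def sequence_to_samples(sequence, feature_lookup, n_classes=6):
    """sequence: task ids in optimal processing order; feature_lookup: id -> feature vector."""
    X, y = [], []
    for pos, task in enumerate(sequence):
        priority = min(pos * n_classes // len(sequence), n_classes - 1)   # classes 0..5
        X.append(feature_lookup[task])
        y.append(priority)
    return np.array(X, dtype=float), np.array(y)
```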
In this section, based on the problem model in Section 3, a genetic algorithm is first used to solve the flow shop scheduling problem considering the transport time, and the resulting set of optimal scheduling solutions serves as the training data set for the neural networks. The parameters of the genetic algorithm are set as follows: number of individuals = 40; maximum number of generations = 500; selection rate = 0.9; crossover rate = 0.8; and mutation rate = 0.6. Considering the transport time of jobs between the buffer sites and machines, as well as between different machines, the tasks and corresponding times of the flow shop scheduling problem are shown in Table 3. The case consists of 8 machines, 2 buffer sites, and 6 jobs, each of which contains 6 operation tasks plus an outbound and an inbound task. The machines are numbered 1–8, the buffer sites are numbered 9 and 10, and the AGVs are numbered 11–16. The AGVs are assigned on a per-job basis: all transport tasks during a job's machining are executed from start to finish by its designated AGV.
For each value pair in Table 3, the first number indicates the machine, AGV, or buffer site used for the task, and the second indicates the processing time, transport time, or outbound/inbound time, respectively. For example, in the row Process 1 and column Job A, (3, 1) denotes that operation task 1 of job A is machined on machine 3 with a processing time of 1 min; the route of job A is (Buffer site 10) → (Machine 3) → (AGV 11) → (Machine 1) → (AGV 11) → (Machine 7) → (AGV 11) → (Machine 4) → (AGV 11) → (Machine 8) → (AGV 11) → (Machine 5) → (Buffer site 9).

6.2. Supernetwork Topology Feature Extraction

The topological characteristics of the flow shop production logistics are used as input parameters to train the neural networks. First, following the construction method of the operation task–logistics–resource supernetwork model, the adjacency matrix of each layer network is obtained (see Appendix A). The specific construction method is as follows:
(1)
If there is no edge relation between the two nodes, the value at the corresponding position in the adjacency matrix is 0.
(2)
If there is an edge relation between two nodes, the value at the corresponding position in the adjacency matrix is non-zero. In the adjacency matrix of the operation tasks layer network, the value represents the processing time between the start node and the end node (a value of 0.01 indicates an adjacency relationship for an outbound or inbound task, whose processing time is ignored). In the adjacency matrices of the job logistics layer network and the production resources layer network, the values represent the logistics transport time between the start node and the end node. A small construction sketch is given below.
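The following sketch fills one layer's adjacency matrix according to these rules; the node indexing, edge lists, and handling of the 0.01 sentinel are illustrative assumptions.

```python
# Building one layer's weighted adjacency matrix with a 0.01 sentinel for in/outbound tasks.
import numpy as np

def build_adjacency(n_nodes, weighted_edges, inout_edges):
    """weighted_edges: (i, j, time) triples; inout_edges: (i, j) pairs whose
    processing time is ignored and therefore coded as 0.01."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j, t in weighted_edges:
        A[i, j] = t
    for i, j in inout_edges:
        A[i, j] = 0.01
    return A
```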
According to the adjacency matrix and each network model, UCINET software can be used to obtain the network model diagram of the operation task layer and the network topology characteristics of each node, as shown in Table 4. Since the outbound and inbound tasks do not affect the scheduling sequence, only the characteristic attributes of operation tasks other than outbound and inbound tasks are retained in Table 4. All network features in the table, including those of subsequent resource layer nodes, correspond to the attribute description in Section 4.2.2.
From the network topology characteristics of the operation task and logistics layers in Table 4, the sequence number, out-degree, high-order out-degree, and degree of each operation task node, as well as the out-degree of the corresponding logistics layer node, can be obtained. These characteristics reflect the state of the production process and logistics transport. At the same time, in order to describe the sequence and resource constraints between conflicting operation tasks intuitively, the degree contribution of a constraint edge is set to 0.01; otherwise, weighted edges could not be constructed.
According to the machine network adjacency matrix and the supernetwork adjacency matrix, part of the network topology characteristics of each node can be obtained. Meanwhile, according to the determination of the China Machines Association in 2015, the average MTBF of machines made in Germany is 2000 h, and that of machines made in China is 1100 h. Assuming that machines 1, 2, 3, and 4 are made in Germany, and machines 5, 6, 7, and 8 are made in China, the loss rates per unit time are preliminarily selected as 0.000008% and 0.000017%, respectively. In summary, the selected network topology features are extracted and counted, and the comprehensive machine loss is calculated from the superedge attributes of the machine nodes. The statistics of the machining resource characteristics are shown in Table 5.

6.3. Training the BP Neural Network

First, the genetic algorithm is used to obtain the operation sequence of the optimal scheduling scheme under the existing scheduling rules. Secondly, based on this scheduling sequence, the operations are classified according to the selected input parameters and the data sets are generated. Then, the scheduling data set of the optimal scheduling sequence is selected, and the coding of each operation task and its priority is obtained from the coding table; that is, the training data set of BP neural network 1 is obtained. Based on the obtained scheduling schemes, the data set is assembled and BP neural network 1 is established. Its parameters are set as follows: 22 input nodes, 20 nodes in hidden layer 1, 10 nodes in hidden layer 2, 6 output nodes, a learning rate of 0.0001, and a maximum of 1000 training epochs.
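For illustration, the snippet below builds a classifier with the quoted layer sizes and learning rate using scikit-learn and random placeholder data, as a stand-in for the original Matlab implementation; it is a sketch of the setup, not the authors' code or data.

```python
# Sketch of BP neural network 1 with the quoted hyperparameters (placeholder data).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(500, 22)).astype(float)   # placeholder 22-bit coded inputs
y_train = rng.integers(1, 7, size=500)                        # placeholder priority labels 1..6

nn1 = MLPClassifier(hidden_layer_sizes=(20, 10), learning_rate_init=1e-4,
                    max_iter=1000, random_state=0)
nn1.fit(X_train, y_train)
print(nn1.predict(X_train[:5]))   # predicted priority classes for five tasks
```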
The confusion matrix in Table 6 shows the training results of BP neural network 1 and reflects its accuracy in classifying the priority of operation tasks. In machine learning, a confusion matrix (also called an error matrix) summarizes the prediction results of a classification problem: it counts the numbers of correct and incorrect predictions and breaks them down by class, from which the accuracy is obtained.
The values on the diagonal of Table 6 represent the number of correct predictions for each class. For example, for operation tasks with actual priority 1, the number of times they are classified as priority 1 is 2665, the number of times they are classified as priority 2 is 335, and they are never classified as priorities 3, 4, 5, or 6. Therefore, the number of correct classifications is 2665, and the prediction accuracy for priority 1 is 2665 out of 3000 predictions, i.e., 88.83%. Table 6 also lists the classification accuracy of the neural network for each category and the overall classification accuracy.
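A quick check of this per-class accuracy computation, using the first column of Table 6:

```python
# Per-class accuracy for priority 1: correct predictions over all predictions of that class.
correct, total = 2665, 2665 + 335
print(round(100 * correct / total, 2))   # 88.83
```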
Similar problems have been addressed in previous studies. For example, in a flow shop scheduling algorithm based on an artificial neural network [30], the classification accuracy of the BP neural network model is 70%, while in a study on neural network scheduling based on complex network features [31], the classification accuracy of the BP neural network is 76.2%. Compared with these studies, we add the topological feature indexes of the supernetwork, which allows the neural network model to describe the scheduling process more comprehensively and further improves the training results.
To further determine the scheduling sequence, BP neural network 2 is trained after the priority division of all operations has been produced by BP neural network 1. The parameters of neural network 2 are set as follows: 24 input nodes, 16 nodes in the hidden layer, 2 output nodes, a learning rate of 0.00001, and a maximum of 1000 training epochs. Table 7 shows the training results of BP neural network 2, which reflect the accuracy of the priority classification of conflicting tasks and the overall classification accuracy.
Using BP neural networks 1 and 2, the sequencing results for each operation task of the flow shop scheduling problem with transport time yield a scheduling scheme with a maximum completion time of 84. A scheduling Gantt chart can be generated from the scheduling sequence, as shown in Figure 3, in which the numbers represent the start and end times of the operation tasks and the logistics transport links. The logistics transport links include the logistics tasks of both the warehouse and production links: the gray squares crossed by dashed lines denote the warehouse-link logistics, the squares filled with dashed lines denote the production-link logistics, and the colored squares represent the operation tasks.

6.4. Experimental Comparison

To verify the performance of the dual BP neural network and compare it with other optimization methods, we use three sets of flow shop scheduling problems considering the transport time, and each method is run 100 times in the experiments. To better reflect how the six algorithms (GA/PSO/SA/LPT/SPT/Dual BP) find the optimal solution, the completion times and computational times over the 100 runs are recorded. The first set of flow shop scheduling cases, FT06, includes 8 machines, 6 jobs, and 6 operation tasks per job; the second set, LA01–LA05, has 13 machines, 10 jobs, and 10 operation tasks per job; and the third set, LA26–LA30, has 13 machines, 20 jobs, and 10 operation tasks per job. Table 8 shows the completion time of each algorithm, and Table 9 shows the computational time of each algorithm.
Based on Table 8 and Table 9, it can be seen that scheduling with the dual BP neural network results in a shorter completion time and a shorter computational time, and its optimization results (completion times) are more stable than those of the other algorithms. Although the genetic algorithm can sometimes obtain a good objective value, its results are unstable; on average, its performance is poor and it requires a much longer computational time. Although particle swarm optimization and simulated annealing give relatively stable completion times, these two approaches find it difficult to reach the optimum because their search often becomes trapped in local minima. The traditional SPT and LPT scheduling rules are fast, but they are only applicable to simple scheduling environments and do not consider the multiple factors of the multi-stage scheduling process; for example, SPT and LPT only consider the processing time of each production task and cannot take the logistics transport time into account.
Meanwhile, comparing the experiments at different problem scales shows that the dual BP neural network model introduced in this paper outperforms the other scheduling methods in solving the flow shop scheduling problem considering the transport time. Moreover, once BP neural network training is completed, optimal scheduling results can be obtained with little computational time for new problems of the same scale, especially as the problem scale grows. This is precisely the challenging case for traditional scheduling methods, whose computational time increases sharply or whose solution quality deteriorates quickly, making their practical application questionable. Although training the BP neural network model takes some time, the network does not need to be re-trained for each new problem, and the scheduling solution can be generated from the trained model. In summary, the BP neural network scheduler can obtain a good solution in a short time with good feasibility.

7. Conclusions

Taking flow shop production and logistics operations as the object, we have established a scheduling model by integrating the supernetwork and neural network. The genetic algorithm is used to solve the flow shop scheduling problem considering the transport time to obtain the data set, and the key topological characteristics have been extracted as part of the input parameters for the neural network.
(1) Based on the hierarchical and heterogeneous relationships between the node elements of flow shop production and logistics, an operation task–logistics–resource supernetwork model based on a multi-layer complex network is developed. Machines, jobs, and buffer sites are mapped as nodes, and the intra-layer/inter-layer correlations between nodes are mapped as superedges. The model overcomes the shortcomings of traditional models in analyzing the relationships between heterogeneous elements and also improves the hierarchical analysis in complex network modeling.
(2) We have designed a dual BP neural network scheduling model for flow shop production logistics. The supernetwork features, such as the node degree, node strength, and clustering coefficient, are extracted as input parameters. In the numerical examples, three sets of flow shop scheduling problems considering the transport time are tested with six algorithms. The results show that the dual BP neural network model performs better in terms of completion time and computational time and is more stable than the other algorithms. The construction of the operation task–logistics–resource supernetwork model offers a comprehensive perspective: it takes into account the correlations between the different layers of factors that affect flow shop production logistics, and therefore improves the classification accuracy of the neural network. Meanwhile, unlike other intelligent algorithms that iterate from an initial solution to an optimal solution, the dual neural network scheduler can meet the real-time scheduling requirements of the flow shop by learning from historical optimal data.
This paper has proposed a scheduling method combining supernetworks and neural networks, providing a new way to solve flow shop production logistics scheduling problems. The performance of the method depends mainly on the completeness and balance of the training data obtained by the genetic algorithm and on the more informative input parameters provided by the operation task–logistics–resource supernetwork. Therefore, further work could establish corresponding databases and rules for each scheduling stage, or consider approaches such as those based on the KKT (Karush–Kuhn–Tucker) conditions to obtain a more comprehensive sample set. In addition, the dual BP neural network scheduling model could be extended to processes with flexible resources (machines), that is, to another type of flexible scheduling problem.

Author Contributions

H.M. and Z.W. wrote the paper; S.W. provided the funding acquisition; G.Z. collected the related data; J.C. developed a software testing system; F.Z. provided the research idea. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the China University Industry University Research Innovation Fund (No. 2020ITA04008). In addition, the authors also would like to thank Ou Tang of Linköping University, Sweden, for his valuable comments and constructive criticism.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1 shows the adjacency matrix of the production resources layer network in the operation task–logistics–resource supernetwork model.
Table A1. Network adjacency matrix of production resources layer.
Resources | Machine 1 | Machine 2 | Machine 3 | Machine 4 | Machine 5 | Machine 6 | Machine 7 | Machine 8 | Buffer 1 | Buffer 2
Machine 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0
Machine 2 | 2 | 0 | 3 | 0 | 0 | 3 | 0 | 0 | 3 | 0
Machine 3 | 0 | 3 | 0 | 2 | 0 | 0 | 3 | 0 | 0 | 2
Machine 4 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 1
Machine 5 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 1 | 0
Machine 6 | 0 | 3 | 0 | 0 | 2 | 0 | 3 | 0 | 2 | 0
Machine 7 | 0 | 0 | 3 | 0 | 0 | 3 | 0 | 2 | 0 | 3
Machine 8 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 2
Buffer 1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
Buffer 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 | 0 | 0

References

1. Zheng, T.; Ardolino, M.; Bacchetti, A.; Perona, M. The applications of industry 4.0 technologies in manufacturing context: A systematic literature review. Int. J. Prod. Res. 2021, 59, 1922–1954.
2. Liu, W.H.; Hou, J.H.; Yan, X.Y.; Tang, O. Smart logistics transformation collaboration between manufacturers and logistics service providers: A supply chain contracting perspective. J. Manag. Sci. Eng. 2021, 6, 25–52.
3. Winkelhaus, S.; Grosse, E.H. Logistics 4.0: A systematic review towards a new logistics system. Int. J. Prod. Res. 2020, 58, 18–43.
4. Hu, Y.; Wu, X.; Zhai, J.J.; Lou, P.H.; Qian, X.M.; Xiao, H.N. Hybrid task allocation of an AGV system for task groups of an assembly line. Appl. Sci. 2022, 12, 10956.
5. Xiao, H.N.; Wu, X.; Qin, D.J.; Zhai, J.J. A collision and deadlock prevention method with traffic sequence optimization strategy for UGN-based AGVS. IEEE Access 2020, 8, 209452–209470.
6. Li, Y.F.; Tao, F.; Cheng, Y.; Zhang, X.Z.; Nee, A.Y.C. Complex networks in advanced manufacturing systems. J. Manuf. Syst. 2017, 43, 409–421.
7. Nagurney, A.; Dong, J. Supernetworks: Decision-Making for the Information Age; Edward Elgar Publishers: Cheltenham, UK, 2002; pp. 803–818.
8. Esteso, A.; Peidro, D.; Mula, J.; Díaz-Madroñero, M. Reinforcement learning applied to production planning and control. Int. J. Prod. Res. 2023, 61, 5772–5789.
9. Zhou, G.H.; Chen, Z.H.; Zhang, C.; Chang, F.T. An adaptive ensemble deep forest based dynamic scheduling strategy for low carbon flexible job shop under recessive disturbance. J. Clean. Prod. 2022, 337, 130541.
10. Ahn, J.; Kim, H.J. A branch and bound algorithm for scheduling of flexible manufacturing systems. IEEE Trans. Autom. Sci. Eng. 2023, 21, 4382–4396.
11. Hajibabaei, M.; Behnamian, J. Fuzzy cleaner production in assembly flexible job-shop scheduling with machine breakdown and batch transportation: Lagrangian relaxation. J. Comb. Optim. 2023, 45, 112.
12. Leu, J.S.; Chen, C.F.; Hsu, K.C. Improving heterogeneous SOA-based IoT message stability by shortest processing time scheduling. IEEE Trans. Serv. Comput. 2014, 7, 575–585.
13. Kruk, L.; Lehoczky, J.; Ramanan, K.; Shreve, S. Heavy traffic analysis for EDF queues with reneging. Ann. Appl. Probab. 2011, 21, 484–545.
14. Della Croce, F.; Scatamacchia, R. The longest processing time rule for identical parallel machines revisited. J. Sched. 2020, 23, 163–176.
15. Meng, L.L.; Duan, P.; Gao, K.Z.; Zhang, B.; Zou, W.Q.; Han, Y.Y.; Zhang, C.Y. MIP modeling of energy-conscious FJSP and its extended problems: From simplicity to complexity. Expert Syst. Appl. 2024, 241, 122594.
16. Meng, L.L.; Cheng, W.Y.; Zhang, B.; Zou, W.Q.; Fang, W.K.; Duan, P. An improved genetic algorithm for solving the multi-AGV flexible job shop scheduling problem. Sensors 2023, 23, 3815.
17. Fontes, D.; Homayouni, S.M.; Gonçalves, J.F. A hybrid particle swarm optimization and simulated annealing algorithm for the job shop scheduling problem with transport resources. Eur. J. Oper. Res. 2023, 306, 1140–1157.
18. Zhang, C.Y.; Jiang, P.Y.; Zhang, L.; Gu, P.H. Energy-aware integration of process planning and scheduling of advanced machining workshop. Proc. Inst. Mech. Eng. Part B-J. Eng. Manuf. 2017, 231, 2040–2055.
19. Meng, L.L.; Zhang, C.Y.; Zhang, B.; Gao, K.Z.; Ren, Y.P.; Sang, H.Y. MILP modeling and optimization of multi-objective flexible job shop scheduling problem with controllable processing times. Swarm Evol. Comput. 2023, 82, 101374.
20. Atsmony, M.; Mor, B.; Mosheiov, G. Single machine scheduling with step-learning. J. Sched. 2022, 27, 227–237.
21. Yang, Y.Q.; Chen, X.; Yang, M.L.; Guo, W.; Jiang, P.Y. Designing an industrial product service system for robot-driven sanding processing line: A reinforcement learning based approach. Machines 2024, 12, 136.
22. Golmohammadi, D. A neural network decision-making model for job-shop scheduling. Int. J. Prod. Res. 2013, 51, 5142–5157.
23. Zhang, G.H.; Lu, X.X.; Liu, X.; Zhang, L.T.; Wei, S.W.; Zhang, W.Q. An effective two-stage algorithm based on convolutional neural network for the bi-objective flexible job shop scheduling problem with machine breakdown. Expert Syst. Appl. 2022, 203, 117460.
24. Kumar, H.; Giri, S. Optimisation of makespan of a flow shop problem using multi layer neural network. Int. J. Comput. Sci. Math. 2020, 11, 107–122.
25. Zhang, Z.C.; Wang, W.P.; Zhong, S.Y.; Hu, K.S. Flow shop scheduling with reinforcement learning. Asia-Pac. J. Oper. Res. 2013, 30, 5.
26. Wang, S.Y.; Li, J.X.; Jiao, Q.S.; Ma, F. Design patterns of deep reinforcement learning models for job shop scheduling problems. J. Intell. Manuf. 2024, preprint.
27. Zhang, F.Q.; Jiang, P.Y. Complexity analysis of distributed measuring and sensing network in multistage machining processes. J. Intell. Manuf. 2013, 24, 55–69.
28. Burdett, R.L.; Corry, P.; Yarlagadda, P.; Eustace, C.; Smith, S. A flexible job shop scheduling approach with operators for coal export terminals. Comput. Oper. Res. 2019, 104, 15–36.
29. Shen, H.Y.; Wang, Z.X.; Gao, C.Y. Determining the number of BP neural network hidden layer units. J. Tianjin Univ. Technol. 2008, 5, 13–15.
30. Cao, C.Q.; Jin, W.Z. Job-shop scheduling using artificial neural network. Comput. Knowl. Technol. 2016, 12, 204–207. (In Chinese)
31. Zou, M. Research on Complex Network Features Based Neural Network Scheduler for Job Shop Scheduling Problem. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2019.
Figure 1. Operation task–logistics–resource supernetwork model.
Figure 2. Neural network scheduler diagram.
Figure 3. Flow shop scheduling results generated by dual BP neural networks.
Table 1. Mathematical notations.
Mathematical Symbol | Meaning
J = {J1, J2, …, Ji, …, Jn} | Set of jobs
Ji (i = 1, 2, …, n) | ith job
M = {M1, M2, …, Mm} | Set of machines
Mk (k = 1, 2, …, m) | kth machine
C = {C1, C2, …, Cp, …, CP} | Set of buffer sites
Cp (p = 1, 2, …, P) | pth buffer site
Oij | jth operation of job i
Ri | Number of operations of job i
Oi(j−1) | Previous operation of Oij in job i
Oi′j′ | Previous operation on the machine of Oij
PTijk | Processing time of the jth operation of job i on machine k
STijk | Starting time of the jth operation of job i on machine k
ETijk | Completion time of the jth operation of job i on machine k
TTkm | Transport time between machine Mk and machine Mm
Zijk | 0–1 decision variable. The value is 1 if Oij is processed on machine k, 0 otherwise.
Aikm | 0–1 decision variable. The value is 1 if job i is transported from machine k to machine m, 0 otherwise.
Bijghk | 0–1 decision variable. The value is 1 if the next process after Oij on machine k is Ogh, 0 otherwise.
FTi | Completion time of job i
L | A positive large number
FTmax | Maximum completion time for all jobs
Table 2. The definitions of each sub-network.
Sub-Network | Element | Definition
Production resources layer GR | Production resource node set VR | It includes machines for processing various jobs and buffer sites for storage of raw or Work-In-Process (WIP) materials.
 | Production resource edge set ER | It refers to the transport relationship between the above nodes (machines and machines, machines and buffer sites).
 | Production resource edge weight set wR | The weight denotes the transport time corresponding to two nodes and reflects the spatial constraints between production resource nodes in the flow shop and their correlations in the logistics flow.
Job logistics layer sub-network GT | Logistics node set VT | It is a mapping from the operation task network to the production resources network. Its nodes denote machines and buffer sites.
 | Logistics edge set ET | It denotes the transport relationship between logistics nodes according to the operation tasks.
 | Logistics edge weight set wT | Its value represents the logistics transport time between the corresponding machines in the process and reflects the degree of logistics flow in the process.
Operation tasks layer sub-network GP | Operation task node set VP | It refers to all machining tasks and transport tasks from the raw materials to the finished parts.
 | Operation task edge set EP | There are two kinds of related edges: a directed edge indicating the sequential operation relationship of the same job machined on different machines, and an undirected edge indicating the constraint relationship of different jobs machined on the same machine.
 | Operation task edge weight set wP | The weight is only for directed edges, and its value is determined by the completion time of the operation task at the previous node.
Table 3. Flow shop scheduling case considering transport time (Time unit: min).
Jobs | A | B | C | D | E | F
Outbound | 10, 2 | 9, 3 | 10, 2 | 9, 3 | 10, 2 | 9, 3
Process 1 | 3, 1 | 2, 8 | 3, 5 | 2, 5 | 3, 9 | 2, 3
Transport | 11, 5 | 12, 3 | 13, 3 | 14, 2 | 15, 3 | 16, 5
Process 2 | 1, 3 | 3, 5 | 7, 4 | 1, 5 | 2, 3 | 4, 3
Transport | 11, 8 | 12, 6 | 13, 3 | 14, 5 | 15, 6 | 16, 7
Process 3 | 7, 6 | 6, 10 | 6, 8 | 3, 5 | 7, 5 | 6, 9
Transport | 11, 4 | 12, 3 | 13, 4 | 14, 3 | 15, 3 | 16, 3
Process 4 | 4, 7 | 7, 10 | 1, 9 | 7, 3 | 6, 4 | 7, 10
Transport | 11, 3 | 12, 8 | 13, 10 | 14, 2 | 15, 4 | 16, 5
Process 5 | 8, 3 | 1, 10 | 8, 1 | 8, 8 | 1, 3 | 5, 4
Transport | 11, 7 | 12, 7 | 13, 7 | 14, 5 | 15, 7 | 16, 7
Process 6 | 5, 6 | 4, 4 | 5, 7 | 6, 9 | 4, 1 | 3, 1
Inbound | 9, 1 | 10, 1 | 9, 1 | 9, 2 | 10, 1 | 10, 2
Time | 56 | 78 | 64 | 57 | 51 | 62
Table 4. Statistics of partial characteristic attributes of operation tasks and logistics layer.
Node | Ni | ODP | HODP | DP | OLDP | Node | Ni | ODP | HODP | DP | OLDP
A-2 | 01 | 1.05 | 26.21 | 6.00 | 5.00 | D-2 | 01 | 5.03 | 35.23 | 4.00 | 2.00
A-3 | 02 | 3.04 | 25.16 | 5.00 | 8.00 | D-3 | 02 | 5.04 | 30.20 | 5.00 | 5.00
A-4 | 03 | 6.05 | 22.12 | 6.00 | 4.00 | D-4 | 03 | 5.05 | 25.16 | 6.00 | 3.00
A-5 | 04 | 7.03 | 16.07 | 4.00 | 3.00 | D-5 | 04 | 3.05 | 20.11 | 6.00 | 2.00
A-6 | 05 | 3.02 | 9.04 | 3.00 | 7.00 | D-6 | 05 | 8.02 | 17.06 | 3.00 | 5.00
A-7 | 06 | 6.02 | 6.02 | 3.00 | 1.00 | D-7 | 06 | 9.04 | 9.04 | 5.00 | 2.00
B-2 | 01 | 8.03 | 46.24 | 4.00 | 3.00 | E-2 | 01 | 9.05 | 25.24 | 6.00 | 3.00
B-3 | 02 | 5.05 | 38.21 | 6.00 | 6.00 | E-3 | 02 | 3.03 | 16.19 | 4.00 | 6.00
B-4 | 03 | 10.04 | 33.16 | 5.00 | 3.00 | E-4 | 03 | 5.05 | 13.16 | 6.00 | 3.00
B-5 | 04 | 10.05 | 23.12 | 6.00 | 8.00 | E-5 | 04 | 4.04 | 8.11 | 5.00 | 4.00
B-6 | 05 | 10.04 | 13.07 | 5.00 | 7.00 | E-6 | 05 | 3.04 | 4.07 | 5.00 | 7.00
B-7 | 06 | 3.03 | 3.03 | 4.00 | 1.00 | E-7 | 06 | 1.03 | 1.03 | 4.00 | 1.00
C-2 | 01 | 5.05 | 34.22 | 6.00 | 3.00 | F-2 | 01 | 3.03 | 30.22 | 4.00 | 5.00
C-3 | 02 | 4.05 | 29.17 | 6.00 | 3.00 | F-3 | 02 | 3.03 | 27.19 | 4.00 | 7.00
C-4 | 03 | 8.04 | 25.12 | 5.00 | 4.00 | F-4 | 03 | 9.04 | 24.16 | 5.00 | 3.00
C-5 | 04 | 9.04 | 17.08 | 5.00 | 10.00 | F-5 | 04 | 10.05 | 15.12 | 6.00 | 5.00
C-6 | 05 | 1.02 | 8.04 | 3.00 | 7.00 | F-6 | 05 | 4.02 | 5.07 | 3.00 | 7.00
C-7 | 06 | 7.02 | 7.02 | 3.00 | 1.00 | F-7 | 06 | 1.05 | 1.05 | 6.00 | 3.00
Table 5. Statistics of some characteristic attributes of production resource layer.
Machines | Machine 1 | Machine 2 | Machine 3 | Machine 4
Degree | 2.00 | 4.00 | 4.00 | 2.00
Clustering coefficient | 0.500 | 0.250 | 0.250 | 0.500
Out-degree of superedge | 30.00 | 19.00 | 26.00 | 14.00
Comprehensive loss | 0.000250 | 0.000158 | 0.000217 | 0.000117
Machines | Machine 5 | Machine 6 | Machine 7 | Machine 8
Degree | 2.00 | 4.00 | 4.00 | 2.00
Clustering coefficient | 0.500 | 0.250 | 0.250 | 0.500
Out-degree of superedge | 17.00 | 40.00 | 38.00 | 12.00
Comprehensive loss | 0.000255 | 0.000600 | 0.000570 | 0.000180
Table 6. Confusion matrix for BP neural network 1.
Forecast/Actual | Priority 1 | Priority 2 | Priority 3 | Priority 4 | Priority 5 | Priority 6
Priority 1 | 2665 | 335 | 0 | 0 | 0 | 0
Priority 2 | 335 | 2165 | 500 | 0 | 0 | 0
Priority 3 | 0 | 500 | 2335 | 465 | 0 | 0
Priority 4 | 0 | 0 | 165 | 1835 | 0 | 0
Priority 5 | 0 | 0 | 0 | 335 | 1835 | 330
Priority 6 | 0 | 0 | 0 | 165 | 465 | 2670
Classification accuracy (%) | 88.83 | 72.17 | 77.83 | 65.54 | 79.78 | 89.00
Average accuracy (%) | 80.37 | | | | |
Table 7. Confusion matrix for BP neural network 2.
Forecast/Actual | Priority 0 | Priority 1
Priority 0 | 4750 | 210
Priority 1 | 350 | 3500
Classification accuracy (%) | 93.13 | 94.34
Average accuracy (%) | 93.74 |
Table 8. Comparison of scheduling solution results (Unit: min).
Cases | Completion Time | GA | PSO | SA | LPT | SPT | Dual BP
FT06-R1 | Minimum value | 87 | 91 | 86 | 101 | 106 | 88
 | Average value | 94.8 | 97.2 | 95.7 | 101 | 106 | 91.8
FT06-R2 | Minimum value | 103 | 105 | 110 | 119 | 122 | 104
 | Average value | 114.5 | 111.1 | 118.7 | 119 | 122 | 110.3
FT06-R3 | Minimum value | 90 | 94 | 95 | 109 | 111 | 93
 | Average value | 104.8 | 100.4 | 109.1 | 109 | 111 | 98.2
FT06-R4 | Minimum value | 81 | 83 | 84 | 91 | 94 | 83
 | Average value | 86.4 | 90.2 | 94.5 | 91 | 94 | 86.1
FT06-R5 | Minimum value | 68 | 73 | 78 | 82 | 84 | 68
 | Average value | 76.8 | 83.6 | 87.3 | 82 | 84 | 75.4
LA01 | Minimum value | 795 | 817 | 815 | 919 | 879 | 798
 | Average value | 883.4 | 893.7 | 875.9 | 919 | 879 | 852.2
LA02 | Minimum value | 782 | 843 | 877 | 952 | 910 | 843
 | Average value | 897.1 | 882.7 | 913.8 | 952 | 910 | 864.5
LA03 | Minimum value | 735 | 775 | 807 | 822 | 940 | 782
 | Average value | 807.2 | 817.8 | 836.1 | 822 | 940 | 798.9
LA04 | Minimum value | 740 | 768 | 778 | 865 | 891 | 801
 | Average value | 847.6 | 835.8 | 852.9 | 865 | 891 | 831.4
LA05 | Minimum value | 643 | 653 | 694 | 741 | 719 | 656
 | Average value | 713.1 | 702.9 | 711.6 | 741 | 719 | 696.8
LA26 | Minimum value | 2329 | 2337 | 2346 | 2639 | 2730 | 2359
 | Average value | 2519.3 | 2445.4 | 2569.9 | 2639 | 2730 | 2550.85
LA27 | Minimum value | 2434 | 2400 | 2438 | 2791 | 2936 | 2464
 | Average value | 2592.2 | 2554.2 | 2661.6 | 2791 | 2936 | 2546.7
LA28 | Minimum value | 2294 | 2341 | 2376 | 2653 | 2904 | 2314
 | Average value | 2482.9 | 2496.1 | 2578.7 | 2653 | 2904 | 2476.5
LA29 | Minimum value | 2318 | 2371 | 2340 | 2512 | 2625 | 2323
 | Average value | 2466.1 | 2500.2 | 2595.1 | 2512 | 2625 | 2572.2
LA30 | Minimum value | 2355 | 2355 | 2424 | 2616 | 2761 | 2380
 | Average value | 2557.8 | 2530.9 | 2598.4 | 2616 | 2761 | 2569.2
Table 9. Comparison of computational time (Unit: s).
Cases | GA | PSO | SA | LPT | SPT | Dual BP
FT06-R1 | 10.21 | 13.11 | 3.71 | 0.37 | 0.34 | 1.01
FT06-R2 | 11.13 | 12.91 | 3.16 | 0.37 | 0.38 | 0.98
FT06-R3 | 11.09 | 12.16 | 2.99 | 0.39 | 0.36 | 1.21
FT06-R4 | 9.89 | 11.87 | 3.82 | 0.35 | 0.36 | 1.14
FT06-R5 | 9.96 | 12.17 | 2.87 | 0.36 | 0.39 | 1.09
LA01 | 25.15 | 34.65 | 7.91 | 0.78 | 0.81 | 3.12
LA02 | 25.87 | 35.18 | 6.78 | 0.77 | 0.80 | 3.41
LA03 | 24.05 | 34.16 | 6.44 | 0.75 | 0.82 | 3.36
LA04 | 24.98 | 35.78 | 7.13 | 0.75 | 0.79 | 3.67
LA05 | 25.27 | 35.96 | 6.61 | 0.76 | 0.78 | 3.59
LA26 | 35.69 | 44.67 | 19.17 | 1.00 | 1.04 | 7.12
LA27 | 37.88 | 45.61 | 18.93 | 1.13 | 1.02 | 6.92
LA28 | 34.72 | 43.82 | 18.86 | 1.04 | 0.99 | 7.43
LA29 | 34.19 | 44.59 | 17.91 | 0.98 | 0.97 | 7.01
LA30 | 37.28 | 45.14 | 19.24 | 1.03 | 1.01 | 7.08

