Article

Environment-Aware Production Scheduling for Paint Shops in Automobile Manufacturing: A Multi-Objective Optimization Approach

School of Economics and Management, Xiamen University of Technology, Xiamen 361024, China
Int. J. Environ. Res. Public Health 2018, 15(1), 32; https://doi.org/10.3390/ijerph15010032
Submission received: 22 November 2017 / Revised: 19 December 2017 / Accepted: 20 December 2017 / Published: 25 December 2017
(This article belongs to the Special Issue Green Environment, Green Operations and Sustainability)

Abstract
The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge this gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices, which must be performed before each color change. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately evaluating the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers from the literature on large instances with up to 200 cars.

1. Introduction

In recent years, the Chinese government has enforced strict regulations to deal with pollution in the manufacturing industry [1]. The regulatory pressure urges relevant companies to pay more attention to the sustainability aspects of their operational systems with the aim of reducing pollutant emissions. Recent research has revealed that production scheduling can serve as a cost-effective tool for realizing the goal of sustainable manufacturing [2]. For example, Zhang et al. [3] develop a time-indexed integer programming formulation to identify production schedules that minimize energy consumption under TOU (time-of-use) tariffs. Liu and Huang [4] investigate a batch-processing machine scheduling problem and a hybrid flow shop scheduling problem with carbon emission criteria. Zhou et al. [5] apply a genetic algorithm (GA) to optimize production schedules in the textile dyeing industry with the explicit aim of reducing the consumption of fresh water.
Production scheduling is a system-level decision which determines the processing sequence of jobs (orders) in each production unit. Conventional scheduling research has mostly been focused on profit-driven performance indicators such as makespan (for measuring production efficiency) and total flowtime (for measuring work-in-process inventory). To incorporate environmental considerations, it is possible to introduce sustainability-inspired objectives into the scheduling models so that the resulting job processing sequence can achieve a satisfactory trade-off between the traditional performance goal and the pollution reduction goal.
This paper reports a new study based on the motivation and methodology stated above. In particular, we consider the scheduling of a paint shop in automotive manufacturing systems, where pollutant emissions are mainly caused by the frequent cleaning operations performed on painting devices such as spray guns. The cleaning process inevitably leads to discharge of unconsumed paints and detergents which contain hazardous chemicals. Therefore, the scheduling function should attempt to minimize the frequency of cleaning by arranging cars that require identical or similar colors to be processed in a consecutive manner (because a deep cleaning is needed whenever the painting equipment is preparing to switch to a drastically different color for painting). However, considering the requirement on pollution reduction alone is not feasible in practice, due to the fact that the paint shop is coupled with the subsequent assembly shop via a buffer system with limited resequencing capacity, which means the sequencing decision for the paint shop will have a strong impact on possible processing sequences for the assembly shop. In this case, the preferences of the assembly shop must be considered simultaneously and thus should be integrated into the scheduling problem for the paint shop. This clearly defines a bi-objective optimization problem, in which one objective function is concerned with minimization of pollutant emissions while the other objective function reflects the major criterion adopted by the assembly shop (we will consider due date performance in this paper because the assembly shop must strive to deliver finished products on time to the final testing department). To solve such a complicated production scheduling problem with reasonable computational time, we will present a highly modified multi-objective particle swarm optimization (MOPSO) algorithm with enhanced search abilities.
The remainder of this paper is organized as follows. Section 2 provides a brief review of the literature related to the scheduling of automotive manufacturing processes. Section 3 introduces the production environment considered in our research (with a focus on the buffer system) and then formulates the studied bi-objective production scheduling problem as a mixed-integer programming model. Section 4 deals with the subproblem of scheduling the assembly shop under a given schedule for the paint shop and the intermediate buffer. Section 5 presents the main algorithm, i.e., the proposed MOPSO for solving the bi-objective integrated production scheduling problem. Section 6 reports the computational results together with statistical tests that compare the proposed algorithm with an exact solver and two high-performing generic multi-objective optimizers published in recent literature. Finally, Section 7 concludes the paper and discusses potential directions for future research.

2. Literature Review

2.1. The Color-Batching Problem

A line of research that is closely related to our study deals with the color-batching problem, which concerns the use of a buffer system to adjust the car sequence inherited from the upstream body shop so that the resulting sequence best suits the needs of the paint shop. In particular, the objective is to minimize the number of color changes (or equivalently, to maximize the average size of color blocks) in the output sequence.
Spieckermann et al. [6] present a formulation of the color-batching process as a sequential ordering problem and propose a branch-and-bound (B&B) algorithm to find the optimal output sequence with the minimum number of color changes. Moon et al. [7] conduct a simulation study for designing and implementing a color rescheduling storage (CRS) in an automotive factory and suggest some simple fill and release policies for operating the selectivity bank. Hartmann and Runkler [8] present two ant colony optimization (ACO) algorithms to enhance simple rule-based color-batching methods. The two ACO algorithms are used for handling the two stages of online resequencing, i.e., filling and releasing, respectively. Sun et al. [9] propose two heuristic procedures (namely, arraying and shuffling heuristics) for achieving quick and effective color-batching. The arraying heuristic is applied in the filling stage, while the shuffling heuristic is used in the releasing stage. Experiments show that the two proposed heuristics can work jointly to obtain competitive color-batching results with very short computational time. Ko et al. [10] investigate the color-batching problem on M-to-1 conveyor systems, with motivations from the resequencing problem at a major Korean automotive manufacturer. The authors first develop a mixed-integer linear programming (MILP) model and a dynamic programming (DP) algorithm for a special case of the problem (i.e., 2-to-1 conveyor system), and then propose two efficient genetic algorithms (GAs) to find near-optimal solutions for the general case.
In addition to the above-mentioned method of using a buffer system to resequence cars physically, another way of implementing color-batching is to perform a virtual resequencing of cars. In this case, car positions in the sequence remain unchanged, but the painting colors are reassigned among those cars which share identical product attributes (and therefore are indistinguishable) at the moment. The color-batching problem based on the virtual resequencing strategy is also referred to as the paint shop problem for words (PPW), which has been shown to be NP-complete [11]. Xu and Zhou [12] present four heuristic rules for solving this problem and also propose a beam search (BS) algorithm based on the best heuristic rule. Some researchers, for example Amini et al. [13] and Andres and Hochstättler [14], present heuristic methods for solving a special case of the PPW, i.e., PPW(2,1) or the binary PPW (involving only two painting colors), where each type of car body appears twice in the sequence and has to be painted with different colors. Recently, Sun and Han [15] show that integrated physical and virtual resequencing can generally obtain noticeably better color-batching results than conventional physical resequencing.

2.2. The Car Sequencing Problem

The research on scheduling of automobile manufacturing processes has mainly focused on the car sequencing problem (first described in [16]). The problem concerns the sequencing of cars in the assembly shop, where different options (e.g., sunroof, air-conditioning) are to be installed on the cars by the corresponding workstations distributed along the assembly line. To prevent overload for a workstation, the cars which require the same option have to be spaced out in the processing sequence. Such restrictions are modeled as ratio constraints. Regarding the r-th option, the ratio constraint $n_r : p_r$ indicates that in any subsequence of $p_r$ cars, there should be no more than $n_r$ cars requiring this option. The scheduling objective is therefore to find a sequence that minimizes the number of constraint violations (NP-hard in the strong sense [17,18]).
Golle et al. [19] present a graph representation of the car sequencing problem and develop an exact solution approach based on iterative beam search. Using improved lower bounds, the proposed approach is shown to be superior to the best known exact solution procedure, and can even be applied to problems of real-world size. In addition to exact solution methods, there are also some approaches built on the hybridization of meta-heuristics and mathematical programming techniques. Zinflou et al. [20] propose three hybrid approaches based on a genetic algorithm whose crossover operators use an integer linear programming model for the construction of offspring solutions. It is shown that the hybrid approaches outperform a genetic algorithm with local search and other algorithms found in the literature. Thiruvady et al. [21] design a hybrid algorithm (also called a matheuristic) that integrates Lagrangian relaxation (LR) and ant colony optimization (ACO) for the car sequencing problem. Based on experiments with various LR heuristics, an ACO algorithm, and different hybrids of these methods, it is found that the two-phase LR+ACO method, in which ACO uses the LR solutions to produce its pheromone matrix, is the best-performing method for up to 300 cars.
A well-known variation of the car sequencing problem is the one proposed by the French car manufacturer Renault and used as the subject of the ROADEF’2005 challenge [22]. The Renault problem differs from the standard problem in that, besides the ratio constraints imposed by the assembly shop, it also introduces paint batching constraints and priority classifications. The aim is to find a common processing sequence for both the paint shop and the assembly shop such that a lexicographically defined objective function is minimized. Due to the large number of cars and the tight computational time limit, the algorithms that ranked in the first 10 places of the final competition all belong to the heuristic category. Estellon et al. [23] describe a first-improvement descent heuristic that randomly applies a variety of neighborhood operators and is further enhanced by a strategy that speeds up neighborhood evaluations through incremental calculation. Ribeiro et al. [24] design a set of heuristics mostly based on variable neighborhood search (VNS) and iterated local search (ILS). Quick neighborhood evaluations and ad-hoc data structures are also a key feature of their method. Briant et al. [25] present a simulated annealing (SA) algorithm in which the acceptance probabilities are computed dynamically so that the search process tends to favor the moves that have had the best success rate so far among all possible moves. Gavranović [26] presents a heuristic based on variable neighborhood search (VNS) and tabu search (TS). The author also proposes a data structure to speed up penalty evaluation for ratio constraints and exploits the concept of an alphabet to improve the number of batch colors. As a matter of fact, the Renault version of the car sequencing problem has attracted long-lasting research interest. Most recently, Jahren and Achá [27] revisit this problem to investigate how to close the gap between exact and heuristic methods. The authors report new lower bounds for 7 out of the 19 instances used in the final round of the competition by applying an improved integer programming formulation. In addition, a novel column generation-based exact algorithm is proposed for solving the problem, which outperforms an existing branch-and-bound approach.
Besides the standard version and the Renault version of car sequencing problems, researchers are also studying extended car sequencing problems with additional constraints, objective functions or decision types. Zhang et al. [28] propose an artificial ecological niche optimization (AENO) algorithm for a car sequencing problem with an additional objective of minimizing energy consumption in the sorting process, which is an important issue but often neglected in previous research. Computational experiments show that the proposed AENO algorithm achieves competitive results compared with the most effective heuristics for the conventional objectives, and in the meantime, it realizes considerable reduction of energy consumption. Yavuz [29] studies the combined car sequencing and level scheduling problem which aims at finding the optimal production schedule that evenly distributes different models over the planning horizon and meanwhile satisfies all ratio constraints for the options. The author proposes a parametric iterated beam search algorithm for the combined scheduling problem, which can be used either as a heuristic or as an exact solution method. Chutima and Olarnviwatchai [30] apply a special version of the estimation of distribution algorithm (EDA) called extended coincident algorithm (COIN-E) to a multi-objective car sequencing problem based on a more realistic production setting, namely, two-sided assembly lines. Three objectives are minimized simultaneously in the Pareto sense, including the number of paint color changes, the number of ratio constraint violations and the utility work (i.e., uncompleted operations which must be finished by additional utility workers).
Based on the literature review, the limitations of previous research can be summarized in the following two aspects. Firstly, the existing scheduling models require that the same processing sequence be adopted by both the paint shop and the assembly shop, without considering the resequencing opportunity provided by a buffer connecting the two shops. Secondly, the ratio constraints for installing options in the assembly shop have been overemphasized, while the environmental impact of the cleaning operations in the paint shop has not been precisely measured. In fact, most modern automobile manufacturers adopt a buffer system to connect the shops, and the increased level of automation means that the ratio constraints are not as binding as before. On the other hand, environmental protection has evidently become a major concern in the manufacturing sector. Against this background, this paper aims at dealing with the new challenge of sustainability-conscious production scheduling in the contemporary car manufacturing industry.

3. Problem Formulation

3.1. The Production Setting

In a typical automotive manufacturing system, painting and assembly operations are performed in two sequential workshops connected with a resequencing buffer. Figure 1 provides an illustration of the production setting considered in this study.
Suppose that a set of $n$ cars $\{1, 2, \ldots, n\}$ are to be processed successively in the paint shop. When the next car in the processing sequence requires a different color than the previous car, a setup operation is needed to clean the painting equipment (e.g., spray guns) thoroughly. The cleaning procedure is accompanied by the use of a chemical detergent, and the resulting discharge of unconsumed paint will directly lead to sewage emissions. Therefore, it is desirable to have the cars with identical or similar colors processed consecutively (as blocks) so as to reduce the frequency of color changes in the processing sequence of the paint shop. Formally, for any two colors $e_1$ and $e_2$ from the set of all possible colors $\{1, 2, \ldots, E\}$, let $\delta_{e_1 e_2}$ denote the amount of pollutant emissions caused by the setup operation that is needed between the painting of $e_1$ and the painting of $e_2$. For example, when a light color immediately follows a dark color, the painting devices need a deep cleaning and consequently the emission level is higher. The aim of paint shop scheduling is to minimize the total amount of pollutant emissions produced by the cleaning operations.
Once the cars have been painted, they will be released from the paint shop one after another in their original order. Then, there will be an opportunity to resequence the cars by utilizing the buffer system in order to meet the preferences of the subsequent assembly shop. In this study, we consider due date preferences. Formally, for each car $i$, a due date $d_i$ is given according to the requirement of customers, which is expressed in terms of the latest position in the production sequence. For example, if car $i$ is preferred to be sequenced in the first 10 positions in the assembly shop, we set $d_i = 10$, which means a tardiness cost will be incurred in case the car is placed after the 10th position in the processing sequence. Position-based due date specification makes sense because production in the assembly shop is organized according to a fixed cycle time (also known as a paced assembly line). Once the processing sequence is fixed, the time of completing and releasing each car from the assembly shop is also known. In addition, a priority weight $w_i$ is assigned to each car $i$, which reflects the relative value and importance of its related customer. The aim of assembly shop scheduling is therefore to minimize the total weighted tardiness defined as $\sum_{i=1}^{n} w_i (\hat{\pi}_i - d_i)^+$, where $(x)^+ = \max\{x, 0\}$ and $\hat{\pi}_i$ represents the actual position of car $i$ in the processing sequence of the assembly shop. In this study, we have ignored the ratio constraints (cf. Section 2.2) based on the following two observations (particularly for Chinese car manufacturers). First, the advanced automation technology applied in contemporary car assembly lines significantly reduces the occurrence of workstation overloading. Second, manufacturing of medium-grade cars has gradually transformed to a mass production mode, which means the number of independent optional features has decreased. Despite these facts, however, it must be noted that ratio constraints are still important concerns for European (and more specifically German) manufacturers of high-end cars.
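As an illustration of this position-based objective, the following minimal Python sketch computes the total weighted tardiness of a given assembly sequence; the function name and the small data set are illustrative assumptions rather than part of the formal model.

```python
# Minimal sketch: total weighted tardiness for a position-based assembly sequence.
# `sequence[k-1]` is the car placed at position k; `due` and `weight` hold the
# per-car due positions d_i and priority weights w_i (illustrative data below).

def total_weighted_tardiness(sequence, due, weight):
    twt = 0
    for position, car in enumerate(sequence, start=1):
        twt += weight[car] * max(position - due[car], 0)   # w_i * (pi_i - d_i)^+
    return twt

# Example: car 2 is due by position 1 but sits in position 2, so it is one position late.
due = {1: 2, 2: 1, 3: 3}
weight = {1: 5, 2: 4, 3: 1}
print(total_weighted_tardiness([1, 2, 3], due, weight))   # 4 * (2 - 1) = 4
```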
According to the above descriptions, it is very clear that paint shop and assembly shop have their individual goals which are in fact mutually independent. The major difficulty in the integrated scheduling of both shops arises from the fact that the resequencing buffer connecting them has a limited capacity, which means the n cars cannot be completely resequenced after leaving the paint shop and before entering the assembly shop. Therefore, it is necessary to make a compromise between the goal of paint shop (total pollutant emissions) and the goal of assembly shop (total weighted tardiness) by building a bi-objective optimization model that is able to produce a set of Pareto solutions for the decision-makers.

3.2. The Resequencing Buffer System

The buffer system that connects paint shop and assembly shop offers an opportunity to partially resequence the cars. Among the various types of buffer systems mentioned in [31], selectivity bank (also referred to as mix bank) is the most commonly used system for physical resequencing in the automobile manufacturing industry because of its low cost and high flexibility.
A selectivity bank consists of $L$ parallel lanes, where the $l$-th lane has $c_l$ spaces for storing cars. At the entrance of the buffer, a car may choose to enter any of the lanes with unoccupied spaces and join the queue in that lane. At the exit of the buffer, the first car in any nonempty lane may be released into the processing sequence for the assembly shop. Clearly, the resequencing ability of a selectivity bank depends on the number of lanes. If $L = n$, then it can realize a complete resequencing of the $n$ cars and output any sequence as needed. In reality, however, $L$ is certainly much smaller than the number of cars to be resequenced, and therefore the selectivity bank can only implement a partial resequencing of cars. The general rule is: it is not possible to change the relative order of two cars once they have entered the same lane (because each lane is equivalent to a first-in first-out queue structure).
Figure 2 gives an example to illustrate the function of a selectivity bank with two lanes ($L = 2$) and two spaces in each lane ($c_1 = c_2 = 2$). Initially, the four cars are sequenced according to their indexes, i.e., in the order of 1, 2, 3, 4 (Step (a)). To move car 3 to the very first position, we must let car 3 enter a different lane than the one chosen by car 1 and car 2 because car 3 needs to “jump” over these two cars (Step (b)). Finally, car 3 is released first from the selectivity bank, and consequently the sequencing of the four cars is altered to 3, 1, 2, 4 (Step (c)). Note that it is not possible to realize a sequence like 3, 2, 1, 4 because of the limited number of lanes.
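The behavior of such a buffer can be mimicked with a few lines of code. The sketch below models each lane as a FIFO queue and replays the two-lane example of Figure 2; the lane choices and release order are supplied here as hypothetical inputs (in the actual problem they are decision variables).

```python
from collections import deque

# Minimal sketch of a selectivity bank: each lane is a FIFO queue, a car may
# enter any lane with free space, and only the head car of a lane can leave.
# Lane capacities, the lane choice per car, and the release order used below
# are illustrative assumptions reproducing the two-lane example in the text.

def run_selectivity_bank(arrivals, lane_of_car, release_lanes, capacities):
    lanes = [deque() for _ in capacities]
    for car in arrivals:                      # filling stage
        l = lane_of_car[car]
        assert len(lanes[l]) < capacities[l], "lane is full"
        lanes[l].append(car)
    output = []
    for l in release_lanes:                   # releasing stage
        output.append(lanes[l].popleft())     # only the head car can leave
    return output

arrivals = [1, 2, 3, 4]
lane_of_car = {1: 0, 2: 0, 3: 1, 4: 1}        # car 3 "jumps" via the second lane
release_lanes = [1, 0, 0, 1]                  # release heads of lanes 2, 1, 1, 2
print(run_selectivity_bank(arrivals, lane_of_car, release_lanes, [2, 2]))
# -> [3, 1, 2, 4]
```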

3.3. The MILP Model

We will formulate the paint shop scheduling problem as a mixed-integer linear programming (MILP) model. First, a group of 0−1 decision variables are introduced as follows.
$$x_{ik} = \begin{cases} 1 & \text{if car } i \text{ is processed in the } k\text{-th position in the paint shop}, \\ 0 & \text{otherwise}. \end{cases} \quad (1)$$
$$\hat{x}_{ik} = \begin{cases} 1 & \text{if car } i \text{ is processed in the } k\text{-th position in the assembly shop}, \\ 0 & \text{otherwise}. \end{cases} \quad (2)$$
$$y_{ek} = \begin{cases} 1 & \text{if the car processed in the } k\text{-th position in the paint shop requires color } e, \\ 0 & \text{otherwise}. \end{cases} \quad (3)$$
$$z_{il} = \begin{cases} 1 & \text{if car } i \text{ enters the } l\text{-th lane in the buffer area}, \\ 0 & \text{otherwise}. \end{cases} \quad (4)$$
With these decision variables, the complete MILP model can be established. Note that the other decision variables in the following model (e.g., $Y_{e_1 e_2 k}$, $\Phi_{ij}$, $T_i$) are all defined on the basis of these fundamental variables. We use $M$ to denote a very large positive number.
$$\text{Minimize} \quad TPE = \sum_{k=2}^{n} \sum_{e_1=1}^{E} \sum_{e_2=1}^{E} \left( \delta_{e_1 e_2} \cdot Y_{e_1 e_2 k} \right) \quad (5)$$
$$TWT = \sum_{i=1}^{n} \left( w_i \cdot T_i \right) \quad (6)$$
$$\text{subject to:} \quad \sum_{i=1}^{n} x_{ik} = \sum_{i=1}^{n} \hat{x}_{ik} = 1, \quad k = 1, \ldots, n \quad (7)$$
$$\sum_{k=1}^{n} x_{ik} = \sum_{k=1}^{n} \hat{x}_{ik} = 1, \quad i = 1, \ldots, n \quad (8)$$
$$\sum_{l=1}^{L} z_{il} = 1, \quad i = 1, \ldots, n \quad (9)$$
$$\sum_{i=1}^{n} z_{il} \le c_l, \quad l = 1, \ldots, L \quad (10)$$
$$y_{ek} = \sum_{i=1}^{n} \left( \beta_{ei} \cdot x_{ik} \right), \quad k = 1, \ldots, n, \; e = 1, \ldots, E \quad (11)$$
$$Y_{e_1 e_2 k} \ge y_{e_1 (k-1)} + y_{e_2 k} - 1, \quad k = 1, \ldots, n, \; e_1, e_2 = 1, \ldots, E \quad (12)$$
$$Y_{e_1 e_2 k} \ge 0, \quad k = 1, \ldots, n, \; e_1, e_2 = 1, \ldots, E \quad (13)$$
$$\sum_{k=1}^{n} (k \cdot x_{ik}) - \sum_{k=1}^{n} (k \cdot x_{jk}) \le M \cdot \Phi_{ij}, \quad i, j = 1, \ldots, n \quad (14)$$
$$\sum_{k=1}^{n} (k \cdot x_{ik}) - \sum_{k=1}^{n} (k \cdot x_{jk}) \ge M \cdot (\Phi_{ij} - 1), \quad i, j = 1, \ldots, n \quad (15)$$
$$\sum_{k=1}^{n} (k \cdot \hat{x}_{ik}) - \sum_{k=1}^{n} (k \cdot \hat{x}_{jk}) \le M \cdot (2 - z_{il} - z_{jl}) + M \cdot \Phi_{ij}, \quad i, j = 1, \ldots, n, \; l = 1, \ldots, L \quad (16)$$
$$T_i \ge \sum_{k=1}^{n} (k \cdot \hat{x}_{ik}) - d_i, \quad i = 1, \ldots, n \quad (17)$$
$$T_i \ge 0, \quad i = 1, \ldots, n \quad (18)$$
$$x_{ik}, \hat{x}_{ik}, y_{ek}, z_{il}, \Phi_{ij} \in \{0, 1\}, \quad i, k = 1, \ldots, n, \; e = 1, \ldots, E, \; l = 1, \ldots, L \quad (19)$$
Equation (5) defines the first objective, i.e., minimizing the total pollutant emissions ($TPE$) caused by the setup operations in the paint shop ($\delta_{e_1 e_2}$ represents the amount of emissions during a setup operation to switch from color $e_1$ to color $e_2$). The binary variable $Y_{e_1 e_2 k}$ is defined in such a way that $Y_{e_1 e_2 k} = 1$ if and only if the car in the $(k-1)$-th position is painted with color $e_1$ (i.e., $y_{e_1 (k-1)} = 1$) and meanwhile the car in the $k$-th position is painted with color $e_2$ (i.e., $y_{e_2 k} = 1$); $Y_{e_1 e_2 k} = 0$ otherwise (i.e., either $y_{e_1 (k-1)} = 0$ or $y_{e_2 k} = 0$). Equation (6) defines the second objective, i.e., minimizing the total weighted tardiness ($TWT$) incurred by late finishing of cars in the subsequent assembly shop ($w_i$ represents the priority weight of car $i$). Equations (7) and (8) reflect the assignment constraints, i.e., each car should be assigned to exactly one position in the processing sequence and each position in the sequence must be occupied by exactly one car. Likewise, Equation (9) specifies that each car can only choose to enter one lane of the selectivity bank. Equation (10) requires that the number of cars entering lane $l$ should not exceed its capacity denoted by $c_l$ (we can assume $c_l = \infty$ if the cars move through the buffer in a dynamic manner). Equation (11) defines $y_{ek}$, which equals 1 if and only if the car in the $k$-th position should be painted with color $e$. Note that $\beta_{ei}$ is a parameter known in advance ($\beta_{ei} = 1$ if car $i$ requires color $e$ and $\beta_{ei} = 0$ otherwise). Equations (12) and (13) provide the definition of $Y_{e_1 e_2 k}$ based on $y_{e_1 (k-1)}$ and $y_{e_2 k}$ (note that, under minimization, the two inequalities are enough to make the $Y_{e_1 e_2 k}$ behave as binary variables). Equations (14) and (15) are used to define $\Phi_{ij}$, which depicts the relative order of two cars $i$ and $j$ in the paint shop: $\Phi_{ij} = 1$ if car $i$ is processed after car $j$, and $\Phi_{ij} = 0$ if car $i$ is processed before car $j$. We need $\Phi_{ij}$ as intermediate variables to reflect the impact of the paint shop sequence on the possible sequences for the assembly shop (note that $\Phi_{ij}$ is used in the following Equation (16)). Equation (16) describes the constraint imposed by the selectivity bank: the relative order of two cars cannot be altered if they travel through the same lane. In particular, if car $i$ has been scheduled before car $j$ in the paint shop (i.e., $\Phi_{ij} = 0$) and both cars have entered the $l$-th lane of the buffer (i.e., $z_{il} + z_{jl} = 2$), then car $i$ is definitely positioned before car $j$ when they leave the buffer for the next production step. Equations (17) and (18) evaluate the tardiness of car $i$ in the assembly shop, which is defined as $T_i = \max\{\hat{\pi}_i - d_i, 0\}$, with $\hat{\pi}_i$ denoting the position of car $i$ in the processing sequence of the assembly shop and $d_i$ representing the preferred latest position of car $i$.
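For readers who wish to experiment with the formulation, the following sketch shows how Equations (5)-(19) could be written with the open-source PuLP modeling package. PuLP, the toy data, and the weighted-sum scalarization of the two objectives are our own illustrative assumptions (the paper itself seeks Pareto solutions rather than a single scalarized optimum).

```python
import pulp

# Partial sketch of the MILP in Equations (5)-(19) using PuLP. This is not the
# authors' implementation: problem sizes, colors, due dates and weights are toy
# values, and the two objectives are combined by an illustrative weighted sum
# (lambda_) purely so that a single solve can be run.
n, E, L = 4, 2, 2
cars, pos = range(1, n + 1), range(1, n + 1)
colors, lanes = range(1, E + 1), range(1, L + 1)
beta = {(1, 1): 1, (1, 2): 1, (2, 3): 1, (2, 4): 1}          # car colors (assumed)
delta = {(a, b): 0 if a == b else 1 for a in colors for b in colors}
cap = {1: n, 2: n}
d = {1: 1, 2: 4, 3: 2, 4: 3}
w = {1: 1, 2: 1, 3: 2, 4: 1}
M, lambda_ = n + 1, 0.5

m = pulp.LpProblem("paint_shop_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (cars, pos), cat="Binary")
xh = pulp.LpVariable.dicts("xh", (cars, pos), cat="Binary")
y = pulp.LpVariable.dicts("y", (colors, pos), cat="Binary")
z = pulp.LpVariable.dicts("z", (cars, lanes), cat="Binary")
Y = pulp.LpVariable.dicts("Y", (colors, colors, pos), lowBound=0)   # (13)
Phi = pulp.LpVariable.dicts("Phi", (cars, cars), cat="Binary")
T = pulp.LpVariable.dicts("T", cars, lowBound=0)                    # (18)

TPE = pulp.lpSum(delta[e1, e2] * Y[e1][e2][k]
                 for e1 in colors for e2 in colors for k in pos if k >= 2)   # (5)
TWT = pulp.lpSum(w[i] * T[i] for i in cars)                                  # (6)
m += lambda_ * TPE + (1 - lambda_) * TWT      # scalarized stand-in for the bi-objective

for k in pos:                                                        # (7)
    m += pulp.lpSum(x[i][k] for i in cars) == 1
    m += pulp.lpSum(xh[i][k] for i in cars) == 1
for i in cars:                                                       # (8), (9)
    m += pulp.lpSum(x[i][k] for k in pos) == 1
    m += pulp.lpSum(xh[i][k] for k in pos) == 1
    m += pulp.lpSum(z[i][l] for l in lanes) == 1
for l in lanes:                                                      # (10)
    m += pulp.lpSum(z[i][l] for i in cars) <= cap[l]
for e in colors:                                                     # (11)
    for k in pos:
        m += y[e][k] == pulp.lpSum(beta.get((e, i), 0) * x[i][k] for i in cars)
for e1 in colors:                                                    # (12), written for k >= 2
    for e2 in colors:
        for k in pos:
            if k >= 2:
                m += Y[e1][e2][k] >= y[e1][k - 1] + y[e2][k] - 1
for i in cars:                                                       # (14)-(16)
    for j in cars:
        if i == j:
            continue
        p_i = pulp.lpSum(k * x[i][k] for k in pos)
        p_j = pulp.lpSum(k * x[j][k] for k in pos)
        m += p_i - p_j <= M * Phi[i][j]
        m += p_i - p_j >= M * (Phi[i][j] - 1)
        for l in lanes:
            m += (pulp.lpSum(k * xh[i][k] for k in pos)
                  - pulp.lpSum(k * xh[j][k] for k in pos)
                  <= M * (2 - z[i][l] - z[j][l]) + M * Phi[i][j])
for i in cars:                                                       # (17)
    m += T[i] >= pulp.lpSum(k * xh[i][k] for k in pos) - d[i]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(TPE), pulp.value(TWT))
```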

3.4. Further Discussion

The studied production system consists of two sequential stages with an additional intermediate buffer. Minimizing the $TPE$ in the first production stage is equivalent to minimizing the sum of sequence-dependent setup times (more accurately, it is the same as the problem $1|s_{ij}|C_{\max}$ [32], which is further equivalent to the traveling salesman problem and thus strongly NP-hard). The problem of the second stage is minimizing $TWT$ under precedence constraints (more precisely, $1|\text{chains}; p_j = 1|\sum w_j T_j$), which is also NP-hard in the strong sense (more details are given in Section 4).
In fact, our problem is much more complicated than the two subproblems mentioned above because of the resequencing buffer located between the two production stages. The buffer is used for partially resequencing the cars after they leave the first stage and before they enter the second stage. In other words, the two stages can process different car sequences. However, the production system does not fall under the category of flow shops because it is impossible to generate an arbitrary processing sequence for the second stage given the processing sequence in the first stage (the buffer has a limited number of lanes and consequently it can only realize partial resequencing of the cars). What complicates the problem is the fact that the buffer system also requires an optimized decision regarding the allocation of cars to each lane.
Now it is fairly clear that the scheduling decisions for the two production stages are tightly coupled. Solving the first-stage problem to optimality may lead to poor performance in terms of the second-stage criterion, and vice versa, which means the strategy of solving each subproblem individually for each production stage is clearly inadequate for addressing the whole problem. The only way of resolving this issue is to build an integrated scheduling model that incorporates the constraints imposed by the resequencing buffer. In this way, it is possible to take the preferences of both stages into consideration and obtain well-balanced schedules for the entire production system. This is exactly the motivation behind the integrated problem formulation.

4. The Assembly Shop Sequencing Subproblem

The main optimization algorithm which will be detailed in Section 5 deals with the decisions on car sequencing in the paint shop as well as the allocation of cars to the different lanes of the selectivity bank. A critical issue that arises in the meantime is how to sequence the cars in the downstream assembly shop under a fixed placement of cars in the selectivity bank. This subproblem needs to be solved properly for the evaluation of solutions in the main algorithm.
The assembly shop sequencing subproblem can be described as $1|\text{chains}; p_j = 1|\sum w_j T_j$ according to the three-field notation system, based on the following observations:
  • The paced production mode in the assembly shop means that each job has an identical processing time ($p_j = p$). In addition, the position-based due date assignment scheme suggests that the situation can be further simplified to $p_j = 1$.
  • Each lane of the selectivity bank is actually imposing a set of precedence constraints (in the chain form) on the relevant cars traveling through the lane. These precedence constraints must be respected when scheduling the assembly shop.
Lemma 1.
The problem $1|p_j = 1|\sum w_j T_j$ is polynomially solvable.
Proof. 
It can be shown that this problem is equivalent to the Assignment Problem (concerning the assignment of n jobs to n consecutive positions on a single machine):
$$\min \sum_{i=1}^{n} \sum_{j=1}^{n} C(i,j)\, x_{ij} \quad (20)$$
$$\text{s.t.} \quad \sum_{j=1}^{n} x_{ij} = 1, \quad i = 1, \ldots, n \quad (21)$$
$$\sum_{i=1}^{n} x_{ij} = 1, \quad j = 1, \ldots, n \quad (22)$$
$$x_{ij} \in \{0, 1\}, \quad i = 1, \ldots, n, \; j = 1, \ldots, n \quad (23)$$
The equivalence is established by setting the cost of assigning job $i$ to the $j$-th position as $C(i,j) = w_i \cdot \max\{j - d_i, 0\}$. Therefore, the Hungarian algorithm can solve this problem within $O(n^3)$ time. ☐
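The following Python sketch illustrates Lemma 1 using SciPy's assignment-problem solver (a Hungarian-style routine); the job data are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of Lemma 1: build the n x n cost matrix C[i, j] = w_i * max((j+1) - d_i, 0)
# (0-based column j corresponds to position j + 1) and solve the assignment problem.
def solve_relaxed_twt(weights, due_dates):
    n = len(weights)
    positions = np.arange(1, n + 1)
    C = np.maximum(positions[None, :] - np.asarray(due_dates)[:, None], 0) \
        * np.asarray(weights)[:, None]
    rows, cols = linear_sum_assignment(C)     # job rows[t] is assigned to position cols[t] + 1
    sequence = [0] * n
    for job, position in zip(rows, cols):
        sequence[position] = job
    return sequence, float(C[rows, cols].sum())

# Illustrative data: 4 jobs (0-indexed ids) with weights w and due positions d.
seq, twt = solve_relaxed_twt(weights=[4, 5, 3, 1], due_dates=[2, 1, 4, 2])
print(seq, twt)
```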
Lemma 2.
The problem $1|\text{chains}; p_j = 1|\sum w_j T_j$ is NP-hard in the strong sense.
For proof of the lemma, readers are suggested to refer to the work of Leung and Young [33].

4.1. A Branch-and-Bound Algorithm

In view of the complexity results presented above, we propose a branch-and-bound (B&B) algorithm to solve the problem $1|\text{chains}; p_j = 1|\sum w_j T_j$, using solutions of the relaxed problem $1|p_j = 1|\sum w_j T_j$ as the basis for bounding. Schedules are constructed from the end to the front, i.e., backwards in time, considering the fact that the larger values of weighted tardiness are likely to correspond to the jobs that are scheduled more towards the end of the processing sequence. Therefore, it appears to be beneficial to schedule these jobs first in the branch-and-bound procedure. At the $q$-th level of the search tree, jobs are selected for the $(n-q+1)$-th position. Under the $L$ sets of chain-based precedence constraints, there are at most $L$ branches going from each node at level $q$ to level $q+1$ (because only the last unscheduled job in each chain may be considered). It follows that the number of nodes at level $q$ is bounded by $L^q$. The solution of the relaxed problem without precedence constraints provides a lower bound for the original problem $1|\text{chains}; p_j = 1|\sum w_j T_j$. This bounding strategy is applied to the set of unscheduled jobs at each node of the search tree. If the lower bound ($LB$) is larger than or equal to the objective value of a feasible schedule already found, then the node will be discarded. The complete algorithm is formally described as Algorithm 1.
In the following, we make some comments for better explaining the algorithm. In Line 1, the variable for recording the best objective value obtained so far ( T W T min ) is initialized to be a very large positive number, and the set of nodes ( N ) is initialized with the root node which corresponds to a null sequence N 0 (the job to be put in each position is pending). The lower bound for N 0 is unimportant and thus L B ( N 0 ) is assigned with 0. The tree-type search is then started and the search process will be continued until the node set becomes empty. In each iteration, three major steps are performed, i.e., node selection, branching, and handling of offspring nodes.
  • The algorithm always selects the node with the smallest lower bound from N for further exploration (Line 3). The motivation is to focus on the promising subregion of the search space so that it is likely to discover a feasible solution with lower objective value, leading to more opportunities of pruning the search tree.
  • If the selected node N c has a lower bound below the current best objective value (upper bound), the algorithm has to further explore the node by creating branches on it. This is implemented in Line 6. In particular, the algorithm is attempting to insert jobs into the last vacant position of the current partial solution corresponding to N c . Constrained by the precedence relations given in the form of P 1 , , P L , only the last unscheduled job in each precedence chain P l ( l = 1 , , L ) is applicable for this purpose. Hence, at most L descendant nodes will be created.
  • For each descendant node N l c , the lower bound L B ( N l c ) is first obtained by employing a Hungarian algorithm to solve the relaxed scheduling problem (neglecting precedence constraints) which consists of the unscheduled jobs with respect to the partial solution of N l c (Line 8). Then, three cases are identified and handled separately.
    • If the schedule obtained after solving the relaxed problem ( π l c ) turns out to be a feasible solution for the original problem (which means respecting all the precedence relations), then the algorithm further investigates whether this solution defines a new upper bound and updates the relevant variables when necessary (Lines 11–14).
    • If the obtained schedule is not feasible for the original problem but its objective value (i.e., the lower bound L B ( N l c ) ) turns out to be larger than or equal to the current upper bound (i.e.,  T W T min ), the node N l c can be discarded or fathomed (Line 16) because there is no hope of finding better solutions by branching on N l c .
    • Finally, if the lower bound value is below the upper bound, the node should be explored in the subsequent search process, and therefore, it is added to the node set (Line 18).
Algorithm 1 B&B for $1|\text{chains}; p_j = 1|\sum w_j T_j$
Input: The weight ($w_j$) and due date ($d_j$) of each job $j = 1, \ldots, n$, and $L$ chains of jobs indicating their precedence relations ($P_1, \ldots, P_L$)
1: Let $TWT_{\min} = \infty$ and the node set $\mathcal{N} = \{(N_0, LB(N_0))\}$, where $N_0 = (\cdot, \ldots, \cdot)$ is an empty sequence of $n$ positions, with $LB(N_0) = 0$;
2: while $\mathcal{N} \neq \emptyset$ do
3:  Select the node $N^c$ with minimum $LB$ from $\mathcal{N}$ (i.e., to satisfy $LB(N^c) = \min_{N \in \mathcal{N}} LB(N)$);
4:  $\mathcal{N} \leftarrow \mathcal{N} \setminus \{N^c\}$;
5:  if $LB(N^c) < TWT_{\min}$ then
6:   Branch on $N^c$: locate the last unscheduled job (if any) in each chain $P_1, \ldots, P_L$, and put it into the last empty position in $N^c$, thereby generating up to $L$ nodes ($N_1^c, \ldots, N_L^c$);
7:   for $l = 1$ to $L$ do
8:    Bound $N_l^c$: evaluate the lower bound $LB(N_l^c)$ by solving the relaxed problem $1|p_j = 1|\sum w_j T_j$ for the unscheduled jobs (the schedule obtained is denoted as $\pi_l^c$);
9:    if $\pi_l^c$ is a feasible solution with respect to $P_1, \ldots, P_L$ then
10:     Let $TWT(\pi_l^c) = LB(N_l^c)$;
11:     if $TWT(\pi_l^c) < TWT_{\min}$ then
12:      $TWT_{\min} \leftarrow TWT(\pi_l^c)$;
13:      $\pi^{opt} \leftarrow \pi_l^c$;
14:     end if
15:    else if $LB(N_l^c) \ge TWT_{\min}$ then
16:     Discard $N_l^c$ (i.e., node $N_l^c$ is fathomed);
17:    else
18:     $\mathcal{N} \leftarrow \mathcal{N} \cup \{(N_l^c, LB(N_l^c))\}$;
19:    end if
20:   end for
21:  end if
22: end while
Output: The optimal solution $\pi^{opt}$ and its objective value $TWT(\pi^{opt})$
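A compact Python sketch of Algorithm 1 is given below. It uses the assignment relaxation of Lemma 1 (via SciPy) as the bounding step and a heap for best-first node selection, and it assumes the chains are disjoint and jointly cover all jobs (as they do when each chain corresponds to one buffer lane). The job data in the usage line are made up for illustration and do not correspond to Table 1.

```python
import heapq
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment

def relaxed_twt(w, d):
    """Optimal order and TWT ignoring precedence (the assignment relaxation of Lemma 1)."""
    n = len(w)
    C = np.maximum(np.arange(1, n + 1)[None, :] - np.asarray(d)[:, None], 0) \
        * np.asarray(w)[:, None]
    rows, cols = linear_sum_assignment(C)
    order = [0] * n
    for job, position in zip(rows, cols):
        order[position] = job
    return order, float(C[rows, cols].sum())

def branch_and_bound(w, d, chains):
    """Best-first B&B for 1|chains; p_j = 1|sum w_j T_j (illustrative reconstruction)."""
    n = len(w)
    best_value, best_schedule = float("inf"), None
    tie = itertools.count()
    heap = [(0.0, next(tie), [])]            # (lower bound, tie-breaker, tail: last position first)
    while heap:
        lb, _, tail = heapq.heappop(heap)
        if lb >= best_value:
            continue                          # fathomed
        for chain in chains:                  # branch on the last unscheduled job of each chain
            free = [j for j in chain if j not in tail]
            if not free:
                continue
            new_tail = tail + [free[-1]]      # this job occupies position n - len(tail)
            fixed = sum(w[j] * max((n - k) - d[j], 0) for k, j in enumerate(new_tail))
            head = [j for j in range(n) if j not in new_tail]
            if head:
                order, relaxed = relaxed_twt([w[j] for j in head], [d[j] for j in head])
            else:
                order, relaxed = [], 0.0
            child_lb = fixed + relaxed
            schedule = [head[j] for j in order] + list(reversed(new_tail))
            pos = {job: k for k, job in enumerate(schedule)}
            feasible = all(pos[a] < pos[b] for c in chains for a, b in zip(c, c[1:]))
            if feasible:
                if child_lb < best_value:     # the relaxed schedule is optimal for this subtree
                    best_value, best_schedule = child_lb, schedule
            elif child_lb < best_value:
                heapq.heappush(heap, (child_lb, next(tie), new_tail))
    return best_schedule, best_value

# Illustrative 4-job instance (0-indexed) with chains 0 -> 3 and 1 -> 2; not the Table 1 data.
print(branch_and_bound(w=[4, 5, 3, 1], d=[2, 1, 4, 2], chains=[[0, 3], [1, 2]]))
```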

4.2. An Illustrative Example

Consider a small instance with 4 jobs, the details of which are shown in Table 1. The precedence constraints are given by two chains $P_1 = (1 \to 4)$ and $P_2 = (2 \to 3)$, which means job 1 must precede job 4 and job 2 must precede job 3.
The process of solving this instance with the proposed B&B algorithm is illustrated in Figure 3. The search begins with a null sequence which corresponds to the root node of the tree. In the first iteration, we branch on this node by determining the job for the last position. According to the precedence chains, only job 3 and job 4 are eligible, and thus two offspring nodes are created on level 1. Solving the relaxed subproblem for the left-hand node (for jobs 1, 2 and 4), we obtain the optimal sequence $(4, 1, 2)$ with objective value 25. This is not a feasible solution because job 4 is positioned before job 1, which violates the precedence chain $P_1$. So the left-hand node is associated with a lower bound of 25. Similarly, for the right-hand node, we obtain a lower bound of 10. In the second iteration, we pick out the node $(\cdot, \cdot, \cdot, 4)$, because it has the smaller $LB$, and then create branches on it by choosing the job for the penultimate position. Since job 4 has been scheduled, the last unscheduled job in each chain is recognized to be job 1 (according to $P_1$) and job 3 (according to $P_2$), respectively. Thus, two new nodes are produced on level 2, where the left-hand node leads to a lower bound value of 14 while the right-hand node results in a feasible solution $(1, 2, 3, 4)$ (satisfying both $P_1$ and $P_2$) with objective value 25. In this case, the upper bound $TWT_{\min}$ is updated to 25 and consequently the node $(\cdot, \cdot, \cdot, 3)$ is eliminated because $LB(\cdot, \cdot, \cdot, 3) \ge TWT_{\min}$. In the third iteration, we branch on the only node left in the active node set, i.e., $(\cdot, \cdot, 1, 4)$, and create one descendant node on level 3, considering that there is currently no unscheduled job in the chain $P_1$ and only job 3 in the chain $P_2$ is applicable for the second position. This new node directly leads to a feasible solution $(2, 3, 1, 4)$ with objective value 22, which turns out to be the optimal solution to the problem.

5. The Main Algorithm: MOPSO

5.1. Fundamentals of PSO

To start with, we provide a brief introduction of the basic principles of standard particle swarm optimization (PSO), which serves as the foundation for our multi-objective PSO algorithm.
The PSO algorithm mimics the flocking behavior of birds in the search of optimal solutions for single-objective continuous function optimization [34]. PSO starts with an initial population of particles (scattered randomly over the search space), which represent potential solutions to the considered problem. Each particle is associated with a fitness value which is obtained by evaluation of the objective function. A velocity vector is used to control the flying direction and speed of each particle. In each iteration, the velocity of each particle gets updated based on a trade-off between two incentives: (1) continuing its current flying direction (i.e., inertia); (2) aiming for the best-so-far positions that it knows (i.e., learning). In particular, there are two types of best-so-far positions. One type is the best position that each particle i has visited in its own search history, which is named the personal best position ( p b e s t i ). The other type is the best position discovered so far by all particles in the swarm, which is named the global best position ( g b e s t ). The PSO algorithm iteratively updates the velocity and position for each particle until the convergence condition is satisfied.
Formally, let $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$ and $v_i = (v_{i,1}, v_{i,2}, \ldots, v_{i,D})$ respectively denote the position and the velocity of the $i$-th particle ($i = 1, 2, \ldots, N$) in a $D$-dimensional search space. The personal best position of particle $i$, which records the best solution found by this particle, is denoted by $b_i = (b_{i,1}, b_{i,2}, \ldots, b_{i,D})$. Meanwhile, the global best position, which can be understood as the best among all personal best positions, is stored in $b$ such that $f(b) = \min_{i=1,\ldots,N} f(b_i)$, where $f(\cdot)$ is the objective function to be minimized. In iteration $t$, the following equations will be applied to update the velocity as well as the position of each particle:
$$v_i(t+1) = \omega(t)\, v_i(t) + c_1(t)\, \xi_1 [b_i(t) - x_i(t)] + c_2(t)\, \xi_2 [b(t) - x_i(t)], \quad (24)$$
$$x_i(t+1) = x_i(t) + v_i(t+1), \quad (25)$$
where $\omega, c_1, c_2 \ge 0$ are three key parameters of PSO, which are respectively referred to as the inertia weight, the cognitive acceleration coefficient and the social acceleration coefficient. Note that they can be set as time-variant parameters, which means their values may change over the iterations. $\xi_1$ and $\xi_2$ are random numbers generated from the uniform distribution $U[0, 1]$, which are introduced to provide randomness to the search process of PSO.
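A minimal sketch of one update step, Equations (24) and (25), for a single particle is shown below; the parameter values and random initial vectors are placeholders (the time-variant schedules of Section 5.5 would supply $\omega(t)$, $c_1(t)$ and $c_2(t)$ in iteration $t$).

```python
import numpy as np

# One velocity/position update for a single particle (Equations (24) and (25)).
# Parameter values and the random initial vectors are illustrative placeholders.
rng = np.random.default_rng(0)
D, L = 8, 3                                   # one dimension per car, 3 buffer lanes
x = rng.uniform(0.0, L, size=D)               # current position (random keys)
v = np.zeros(D)                               # current velocity
b_i = rng.uniform(0.0, L, size=D)             # a personal best position
b = rng.uniform(0.0, L, size=D)               # a global best position
omega, c1, c2 = 0.9, 2.0, 2.0

xi1, xi2 = rng.uniform(), rng.uniform()
v = omega * v + c1 * xi1 * (b_i - x) + c2 * xi2 * (b - x)   # Equation (24)
x = x + v                                                    # Equation (25)
# A practical implementation would presumably keep x inside (0, L) before decoding,
# although this detail is not spelled out in the excerpt above.
print(x)
```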

5.2. Encoding and Decoding of Solutions

We adopt a random key-based encoding scheme to represent the sequencing of cars in the paint shop as well as the allocation of cars to the buffer lanes. A potential solution to the problem (i.e., a particle) is expressed by a vector of $n$ real numbers $x = (x_1, x_2, \ldots, x_n)$, where $x_i \in (0, L)$. The decimal part of $x_i$ reflects the relative order of car $i$ in the processing sequence of the paint shop, while the integer part of $x_i$ specifies the assignment of a lane in the selectivity bank for car $i$ when it leaves the paint shop.
In the decoding process, we apply the SPV (smallest position value) rule to determine the position of each car $i$ in the processing sequence of the paint shop ($k_i$). In particular, the $n$ cars are sequenced according to a non-decreasing order of the corresponding values $\{x_i - \lfloor x_i \rfloor : i = 1, \ldots, n\}$. In addition, car $i$ should enter the $l_i$-th lane of the selectivity bank after leaving the paint shop, where $l_i = \lceil x_i \rceil$.
An example with n = 8 and L = 3 is given in Table 2 to illustrate the decoding policy. Based on the given solution x = ( 1.80 , 2.19 , 0.21 , 1.32 , 0.95 , 2.05 , 1.54 , 0.82 ) , the k i and l i values can be obtained as shown in the last row of the table. Clearly, the car sequence for the paint shop is achieved by sorting the decimal parts of { x 1 , , x 8 } in a non-decreasing order, i.e., ( 6 , 2 , 3 , 4 , 7 , 1 , 8 , 5 ) . After their completion in the paint shop, the 8 cars are to be moved into the 3-lane selectivity bank such that each lane is filled with the following subsequences: lane 1 with cars ( 3 , 8 , 5 ) , lane 2 with cars ( 4 , 7 , 1 ) , and lane 3 with cars ( 6 , 2 ) .
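The decoding rule can be written in a few lines. The sketch below reproduces the Table 2 example, interpreting the "integer part" as the ceiling $\lceil x_i \rceil$, which is consistent with keys lying in $(0, L)$ and with the lane assignments listed above.

```python
import math

# Sketch of the random-key decoding (SPV rule for the paint sequence, ceiling
# of the key for the lane), reproducing the Table 2 example with n = 8, L = 3.
def decode(x):
    cars = range(1, len(x) + 1)
    # SPV rule: sort cars by the fractional part of their keys.
    paint_sequence = sorted(cars, key=lambda i: x[i - 1] - math.floor(x[i - 1]))
    lane_of_car = {i: math.ceil(x[i - 1]) for i in cars}
    lanes = {}
    for i in paint_sequence:                  # cars enter the buffer in paint order
        lanes.setdefault(lane_of_car[i], []).append(i)
    return paint_sequence, lanes

x = (1.80, 2.19, 0.21, 1.32, 0.95, 2.05, 1.54, 0.82)
sequence, lanes = decode(x)
print(sequence)   # [6, 2, 3, 4, 7, 1, 8, 5]
print(lanes)      # {3: [6, 2], 1: [3, 8, 5], 2: [4, 7, 1]}
```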

5.3. Evaluation of Solutions

In the evaluation stage, the objective value T P E can be directly obtained based on the decoded processing sequence for paint shop. As for the T W T value, the B&B algorithm presented in Section 4 can be applied if we aim for an accurate evaluation of the T W T objective. However, the B&B algorithm is unavoidably time-consuming, and therefore we only use it for identification of global best solutions, when accuracy is important to distinguish between the solutions (detailed information will be given in Section 5.7). In the majority of time, it is advantageous to use a heuristic rule that provides a reasonably good approximation of the T W T value with a reasonable computational effort. For this purpose, we decide to adopt the dispatching rule named ATC (Apparent Tardiness Cost), which has been shown to be effective for minimizing the T W T criterion [35].
The ATC rule works in a simulation context. Every time the machine becomes idle, a priority index is calculated for each eligible job (in the case of our problem, the first car currently parked in each lane), and the job with the highest index value is selected to be processed next. The priority index $I_j(t)$ is a function of the time $t$ at which the machine becomes idle and is defined for job $j$ as follows (note that it has been adapted to our problem setting):
$$I_j(t) = w_j \cdot \exp\left( -\frac{\max(d_j - 1 - t, 0)}{K} \right),$$
where $K$ is a scaling parameter for which we set a value of 4 (based on preliminary experiments). Meanwhile, $t$ is represented by the number of jobs that have been finished at the time (due to the fact that $p_j = 1$ for all $j$).
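The sketch below simulates the ATC-based release of cars from the buffer lanes; the lane contents and job data are illustrative, while K = 4 follows the setting reported above.

```python
import math
from collections import deque

# Sketch of the ATC-based approximation of TWT: at each step the head car of
# each buffer lane is eligible, and the one with the largest priority index
# I_j(t) = w_j * exp(-max(d_j - 1 - t, 0) / K) is released next. The lane
# contents and job data below are illustrative; K = 4 follows the text.
def atc_sequence(lanes, w, d, K=4.0):
    lanes = [deque(lane) for lane in lanes]
    sequence = []
    t = 0                                         # number of cars already finished
    while any(lanes):
        heads = [lane[0] for lane in lanes if lane]
        best = max(heads, key=lambda j: w[j] * math.exp(-max(d[j] - 1 - t, 0) / K))
        for lane in lanes:
            if lane and lane[0] == best:
                lane.popleft()
                break
        sequence.append(best)
        t += 1
    return sequence

w = {1: 5, 2: 1, 3: 4, 4: 2, 5: 3}
d = {1: 4, 2: 1, 3: 2, 4: 5, 5: 3}
print(atc_sequence([[1, 4], [2, 5], [3]], w, d))
```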

5.4. Initialization of Solutions

Although purely random initialization is possible for the PSO algorithm to work, it is often better to start with a set of structured solutions so as to accelerate the convergence of optimization. We have designed the following procedures for generating such a group of initial solutions.
Step 1:
Sort all the n cars in a non-decreasing order of their due dates. In the case of identical due dates, the cars with larger weights are prioritized. The sorted car sequence is denoted by S.
Step 2:
Select a car randomly from the first τ positions of S, recorded as car c. Let F = { c } , where F stands for the car sequence to be determined for the paint shop. Remove car c from S.
Step 3:
Scan the first $\tau$ cars in the current $S$. Identify the car $c'$ that leads to the minimum setup cost, i.e., such that $\delta_{\varepsilon(c)\varepsilon(c')} = \min_{c'' \in S_\tau} \{\delta_{\varepsilon(c)\varepsilon(c'')}\}$, where $\varepsilon(c)$ represents the color index of car $c$ and $S_\tau$ denotes the subsequence which consists of the first $\tau$ cars in $S$.
Step 4:
Append car $c'$ to the end of sequence $F$. Remove car $c'$ from $S$.
Step 5:
If $S \neq \emptyset$, let $c = c'$ and go back to Step 3. Otherwise, exit the procedure and output $F$.
The above procedure applies a window-based selection mechanism to determine the next car to be scheduled in an iterative manner. In each iteration, a car is selected from the first window of length τ in the initial sequence, which has been generated by the EDD (earliest due date) rule. The greedy selection policy together with the limitation imposed by the window length have contributed to achieving a trade-off between the pollution-minimization goal and the tardiness-reduction goal. The parameter τ is varied from 2 to n / 2 so that a number of different sequences can be produced to diversify the initial solutions.
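A sketch of this window-based construction is given below; the color, emission and due-date data are illustrative, and the function returns only the paint-shop sequence F (the associated lane allocation is handled by the second procedure described next).

```python
import random

# Sketch of the window-based construction of an initial paint-shop sequence:
# start from an EDD order, then repeatedly pick, within the first tau cars,
# the one whose color causes the least setup emission after the current car.
# `color`, `delta`, `due`, `weight` are illustrative inputs.
def initial_paint_sequence(cars, due, weight, color, delta, tau, rng=random):
    S = sorted(cars, key=lambda c: (due[c], -weight[c]))   # Step 1: EDD, ties by weight
    c = rng.choice(S[:tau])                                # Step 2: random seed car
    F = [c]
    S.remove(c)
    while S:                                               # Steps 3-5
        window = S[:tau]
        nxt = min(window, key=lambda cc: delta[color[c], color[cc]])
        F.append(nxt)
        S.remove(nxt)
        c = nxt
    return F

cars = [1, 2, 3, 4, 5, 6]
due = {1: 2, 2: 2, 3: 3, 4: 4, 5: 6, 6: 6}
weight = {1: 1, 2: 3, 3: 2, 4: 1, 5: 2, 6: 1}
color = {1: "red", 2: "blue", 3: "red", 4: "blue", 5: "red", 6: "blue"}
delta = {(a, b): 0 if a == b else 1 for a in {"red", "blue"} for b in {"red", "blue"}}
print(initial_paint_sequence(cars, due, weight, color, delta, tau=2))
```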
Note that an initial solution should also incorporate an allocation of cars to the buffer lanes. To find a suitable allocation associated with the painting sequence determined above, the following procedure is devised, which decides the lane for each car by considering the need for minimizing T W T in the downstream assembly shop.
Input:
The processing sequence for paint shop, F = { c 1 , , c n } .
Step 1:
Solve the single machine scheduling problem $1|p_j = 1|\sum w_j T_j$ for the $n$ cars. The optimal car sequence is recorded as $\pi$.
Step 2:
For each car $c_i \in F$, find its position in $\pi$ and denote the position index by $\alpha(c_i)$.
Step 3:
Let $i = 1$. Define $\hat{\alpha}_l = 0$ for each lane $l = 1, \ldots, L$.
Step 4:
For car $c_i$, if there exists a lane $l$ such that $\alpha(c_i) - \hat{\alpha}_l > 0$, then let $l_i = \arg\min_{l:\, \alpha(c_i) - \hat{\alpha}_l > 0} \{\alpha(c_i) - \hat{\alpha}_l\}$; otherwise, let $l_i = \arg\max_{l \in \{1, \ldots, L\}} \{\alpha(c_i) - \hat{\alpha}_l\}$.
Step 5:
Push car $c_i$ to lane $l_i$. Let $\hat{\alpha}_{l_i} = \alpha(c_i)$.
Step 6:
Let $i \leftarrow i + 1$. If $i \le n$, go back to Step 4. Otherwise, terminate the procedure.
Step 1 computes the ideal processing sequence for the assembly shop (minimizing $TWT$ with no precedence constraints). Then, the following steps attempt to utilize the buffer system to transform the paint shop sequence $F$ (which is already fixed) into a sequence that is as close to this ideal sequence as possible. When it is found that $\alpha(c_i) - \hat{\alpha}_l < 0$ for all $l$, a violation of the ideal sequence occurs. It is generally impossible to reproduce the ideal sequence exactly, so the focus is to use it as a guide for allocating the cars to the buffer lanes.
For example, we consider a case with 8 cars and a 3-lane buffer. Suppose that we have $F = \{c_1, c_2, \ldots, c_8\}$ and $\pi = \{c_4, c_3, c_7, c_2, c_6, c_1, c_8, c_5\}$. Therefore, it can be derived from Step 2 that $\{\alpha(c_1), \alpha(c_2), \ldots, \alpha(c_8)\} = \{6, 4, 2, 1, 8, 5, 3, 7\}$. Executing Steps 3–6 will lead to the following allocation of cars to the three lanes (expressed in the form “$c_i | \alpha(c_i)$”):
Lane 1: $c_1|6 \to c_5|8$
Lane 2: $c_2|4 \to c_6|5 \to c_8|7$
Lane 3: $c_3|2 \to c_4|1 \to c_7|3$
which can produce the sequence $\pi' = \{c_3, c_4, c_7, c_2, c_6, c_1, c_8, c_5\}$ for the assembly shop. It is clear that $\pi'$ differs from $\pi$ only in the first two positions. This violation is due to the step of inserting car $c_4|1$ immediately after car $c_3|2$ when $\alpha(c_4) - \alpha(c_3) < 0$.
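The allocation procedure is easy to express in code; the sketch below reproduces the 8-car example above (the car names and target positions α are taken from that example, and the tie-breaking between equally close lanes simply prefers the lowest lane index).

```python
# Sketch of the lane-allocation procedure (Steps 1-6): follow the ideal
# assembly order pi and push each car of the fixed paint sequence F to the
# lane whose most recent car has the closest smaller target position.
def allocate_lanes(F, alpha, L):
    lanes = {l: [] for l in range(1, L + 1)}
    last_alpha = {l: 0 for l in range(1, L + 1)}            # Step 3
    for car in F:                                           # Steps 4-6
        gaps = {l: alpha[car] - last_alpha[l] for l in lanes}
        positive = {l: g for l, g in gaps.items() if g > 0}
        lane = min(positive, key=positive.get) if positive else max(gaps, key=gaps.get)
        lanes[lane].append(car)                             # Step 5
        last_alpha[lane] = alpha[car]
    return lanes

F = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"]
alpha = {"c1": 6, "c2": 4, "c3": 2, "c4": 1, "c5": 8, "c6": 5, "c7": 3, "c8": 7}
print(allocate_lanes(F, alpha, L=3))
# {1: ['c1', 'c5'], 2: ['c2', 'c6', 'c8'], 3: ['c3', 'c4', 'c7']}
```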
To transform an initialized schedule into the encoded form, we need to associate each car with a real number. Based on the initialization procedures, we assign $x_{c_i} = l_i - 1 + i/(n+1)$ to the car that is ranked at the $i$-th position in the sequence $F$. It is thereby ensured that the encoded solution $x = (x_1, \ldots, x_n)$ can be converted back to the original schedule when it is decoded.

5.5. Time-Variant Parameters

To improve the search performance of the proposed MOPSO, we adopt time-variant settings for three major parameters, i.e., ω , c 1 and c 2 .
The parameter ω determines the impact of the previous velocity on the current velocity of each particle. Setting a larger value for ω will promote extensive search for high-quality solutions, while setting a smaller value is beneficial for local search in the vicinity of the current position. It is well known that exploration and exploitation should be well balanced for any stochastic search algorithm to achieve good performance. The search pattern needs to be adjusted towards exploration in the beginning stage when there is limited information available about the landscape of search space. As the search progresses, more samples will be collected, and accordingly the search mode needs to be switched to more frequent exploitation so that the algorithm can make better use of the promising areas identified so far. Based on this logic, we apply a linearly decreasing policy for setting the parameter ω in iteration t ( t = 0 , , T ), i.e.,
$$\omega(t) = (\omega_e - \omega_b)\frac{t}{T} + \omega_b,$$
where ω b (resp. ω e ) indicates the beginning value (resp. ending value) of the parameter (satisfying ω b > ω e ), and T is the number of iterations for which ω is supposed to change over time (it is assumed that ω is fixed on the ending value from iteration T + 1 onwards, if the algorithm is not terminated).
The acceleration coefficients, namely c 1 and c 2 , can also produce a significant influence on the search behavior of PSO. Setting a larger value for c 1 and a smaller value for c 2 promotes distributed search, which leads to greater dispersion of particles in the search space. Conversely, setting a larger value for c 2 and a smaller value for c 1 will accelerate the convergence to the incumbent global best solution. Motivated by the fact, we apply a linearly decreasing policy for setting the parameter c 1 and a linearly increasing policy for setting the parameter c 2 in iteration t ( t = 0 , , T ), i.e.,
$$c_1(t) = (c_{1e} - c_{1b})\frac{t}{T} + c_{1b},$$
$$c_2(t) = (c_{2e} - c_{2b})\frac{t}{T} + c_{2b},$$
where c 1 b (resp. c 1 e ) denotes the beginning value (resp. ending value) of parameter c 1 (satisfying c 1 b > c 1 e ), and c 2 b (resp. c 2 e ) denotes the beginning value (resp. ending value) of parameter c 2 (satisfying c 2 b < c 2 e ). The setting of T follows the same rule as stated above, and similarly, c 1 and c 2 will remain at their ending values if the algorithm continues after iteration T.

5.6. Sorting of Solutions

In the multi-objective optimization context, Pareto dominance is the basic criterion for distinguishing the quality of solutions. Hence, when sorting a set of solutions denoted by $X$, we should prioritize the Pareto dominance relations by first dividing the set into $Q$ subsets $X_1, \ldots, X_Q$ such that each solution $x \in X_q$ ($2 \le q \le Q$) is dominated by at least one solution $x' \in X_{q-1}$, and meanwhile, any pair of solutions from the same subset are mutually non-dominated. The algorithm for realizing such a Pareto-based sorting is detailed in [36].
The more important aspect of solution sorting lies in the technique for differentiating the solutions within each of these Pareto subsets (also called Pareto ranks), because the number of solutions in each $X_q$ (where there exists no mutual dominance relation) can be considerably large. The general idea for sorting the solutions in a non-dominated subset is to suppress the solutions that are crowded around by other solutions (in the objective space) and prioritize the solutions that are located in less crowded areas of the objective space. The motivation is to guarantee that the maintained solutions are well spread and can represent a wide variety of choices for the decision makers. The following procedure defines a crowding distance measure $u_i$ for each solution $x_i \in X_q$, which reflects the degree of crowdedness in a quantitative manner.
Step 1:
Evaluate the distance between each pair of solutions $x_1, x_2 \in X_q$ as $D(x_1, x_2) = \left[ \sum_{z=1}^{Z} \left( \bar{d}_z(x_1, x_2) \right)^2 \right]^{1/2}$, where $\bar{d}_z(x_1, x_2)$ represents the normalized distance between $x_1$ and $x_2$ with respect to the $z$-th objective function, i.e., $\bar{d}_z(x_1, x_2) = |f_z(x_1) - f_z(x_2)| / (f_z^{\max} - f_z^{\min})$, with $f_z^{\max}$ (resp. $f_z^{\min}$) denoting the maximum (resp. minimum) value of the $z$-th objective in $X_q$. $Z = 2$ is the number of objectives in our problem.
Step 2:
For each solution $x_i \in X_q$, find the $\gamma$ solutions that are situated most closely to $x_i$ in the objective space:
(2.1)
Let $D_i^{(1)} = \min_\theta \left\{ D(x_i, x_\theta) \mid x_\theta \in X_q \setminus \{x_i\} \right\}$ and $\theta(1) = \arg\min_\theta \left\{ D(x_i, x_\theta) \mid x_\theta \in X_q \setminus \{x_i\} \right\}$.
(2.2)
For $g = 2, \ldots, \gamma$, let $D_i^{(g)} = \min_\theta \left\{ D(x_i, x_\theta) \mid x_\theta \in X_q \setminus \left( \{x_i\} \cup \{x_{\theta(1)}, \ldots, x_{\theta(g-1)}\} \right) \right\}$ and $\theta(g) = \arg\min_\theta \left\{ D(x_i, x_\theta) \mid x_\theta \in X_q \setminus \left( \{x_i\} \cup \{x_{\theta(1)}, \ldots, x_{\theta(g-1)}\} \right) \right\}$.
Step 3:
For each solution $x_i \in X_q$, calculate the crowding distance value as $u_i = \frac{1}{\gamma} \sum_{g=1}^{\gamma} D_i^{(g)}$.
When evaluating u i , γ is a parameter that should be set properly to balance accuracy and efficiency. It is suggested to set γ = 4 for solving the problem studied in this paper. In a nutshell, we should sort solutions in a non-dominated set according to a decreasing order of their crowding distance values ( u i ). Based on the sorting result, some solutions in the back rank may have to be abandoned during the evolutionary process due to the limited size of storage for elite solutions.
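The crowding measure can be computed as in the sketch below; the objective vectors are illustrative (TPE, TWT) points from one non-dominated subset, and γ = 4 follows the suggested setting.

```python
import numpy as np

# Sketch of the crowding measure of Section 5.6: Euclidean distances in the
# normalized objective space, then u_i = mean distance to the gamma nearest
# neighbors within the same Pareto rank. Objective vectors are illustrative.
def crowding_values(objectives, gamma=4):
    F = np.asarray(objectives, dtype=float)                  # rows: solutions, cols: objectives
    span = F.max(axis=0) - F.min(axis=0)
    span[span == 0] = 1.0                                    # avoid division by zero
    Fn = (F - F.min(axis=0)) / span                          # normalize each objective
    diff = Fn[:, None, :] - Fn[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))                    # pairwise distances
    u = []
    for i in range(len(F)):
        others = np.delete(D[i], i)
        k = min(gamma, len(others))
        u.append(np.sort(others)[:k].mean())
    return u

# Five mutually non-dominated (TPE, TWT) points; a larger u means less crowded.
print(crowding_values([(10, 90), (12, 80), (14, 72), (30, 40), (60, 10)]))
```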

5.7. Handling of Personal Best and Global Best Solutions

In the proposed MOPSO algorithm, the mechanisms for identifying and updating personal best and global best solutions have been redesigned to suit the multi-objective optimization settings.
(1) Mechanism for handling personal best
Based on the concept of Pareto optimality, the personal best maintained for each particle i should be regarded not as a single solution but as a solution set, denoted by $B_i$. The set $B_i$ is maintained according to the following rules. Initially, $B_i = \{x_i(0)\}$. Then, in each iteration t, the following steps are performed after the newly obtained solution $x_i(t+1)$ has been evaluated. If $x_i(t+1)$ is dominated by some existing solution in $B_i$, it is discarded and $B_i$ is kept unchanged. If $x_i(t+1)$ is not dominated by any solution in $B_i$, it is incorporated into $B_i$, and any existing solutions that are dominated by $x_i(t+1)$ are eliminated from $B_i$.
To make sure that B i keeps the latest information acquired along the search trajectory of particle i, we set a common limit on the maximal size of B i and denote it by m p . Whenever the actual size of B i reaches m p + 1 , we simply remove the oldest solution in B i , so that B i memorizes the most recent m p elite solutions that have been visited by particle i.
In the process of calculating the updated velocity of particle i by Equation (24), the required item b i is selected randomly from the personal best set B i following a uniform probability distribution, which means each candidate solution in B i is considered with equal probability Pr = 1 / | B i | .
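A minimal sketch of this personal-best bookkeeping is given below; the class name and data layout are ours, and only the objective vectors are stored for brevity (a full implementation would keep the associated positions alongside).

```python
import random
from collections import deque

# Sketch of the personal-best archive B_i described above: a new solution is
# rejected if dominated, otherwise it replaces any incumbents it dominates, and
# the archive keeps at most m_p of the most recently added members (FIFO).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class PersonalBest:
    def __init__(self, first_objs, m_p=4):
        self.m_p = m_p
        self.archive = deque([first_objs])            # B_i = {x_i(0)} initially

    def update(self, objs):
        if any(dominates(old, objs) for old in self.archive):
            return                                    # dominated: B_i unchanged
        self.archive = deque(old for old in self.archive if not dominates(objs, old))
        self.archive.append(objs)
        if len(self.archive) > self.m_p:
            self.archive.popleft()                    # drop the oldest member

    def sample(self):
        return random.choice(list(self.archive))      # uniform choice of b_i

pb = PersonalBest((12.0, 35.0))
pb.update((11.0, 40.0))
pb.update((10.0, 50.0))
print(list(pb.archive))   # three mutually non-dominated objective vectors
```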
(2) Mechanism for handling global best
By definition, the global best is aimed at preserving the best-so-far solutions achieved by the entire swarm of particles. In the proposed MOPSO, the global best should also be characterized with a solution set, which we denote by B. Likewise, a limit of m g is placed on the maximal number of solutions that can be stored in B. In each iteration t, we apply the following procedure to update the global best solution set B based on the currently available personal best sets B i ( i = 1 , , N ).
Step 1:
Incorporate all solutions from B 1 B 2 B N into B.
Step 2:
Identify the first two Pareto ranks in B by performing a Pareto-based sorting of the solutions. Remove the solutions that belong to the other Pareto ranks from B.
Step 3:
Evaluate each solution in B using the exact approach (i.e., getting the T W T objective value by the B&B method detailed in Section 4).
Step 4:
Identify the first Pareto rank (i.e., the non-dominated solutions) in B. Remove the other solutions (those which are dominated) from B.
Step 5:
Sort the solutions in B according to the crowding distance measure (cf. Section 5.6).
Step 6:
If | B | > m g , remove from B the ( | B | m g ) solutions that are ranked beyond the first m g places.
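The update procedure of Steps 1 to 6 can be sketched as follows. The sketch reuses the pareto_ranks() and crowding_values() helpers from the Section 5.6 sketches above; the exact re-evaluation of TWT by the branch-and-bound method of Section 4 is represented by a placeholder callback exact_twt, and the (solution, objective-vector) pairing is an assumption of this sketch.

```python
# Sketch of the global-best update (Steps 1-6), reusing pareto_ranks() and
# crowding_values() from the earlier sketches; exact_twt stands in for the B&B
# evaluation of Section 4.

def update_global_best(personal_bests, exact_twt, m_g=25):
    """personal_bests: (solution, (TPE, heuristic TWT)) pairs pooled from all B_i."""
    pool = list(personal_bests)                                        # Step 1
    ranks = pareto_ranks([objs for _, objs in pool])
    keep = [i for front in ranks[:2] for i in front]                   # Step 2
    pool = [pool[i] for i in keep]
    pool = [(sol, (objs[0], exact_twt(sol))) for sol, objs in pool]    # Step 3
    first_front = pareto_ranks([objs for _, objs in pool])[0]          # Step 4
    pool = [pool[i] for i in first_front]
    u = crowding_values([objs for _, objs in pool])                    # Step 5
    order = sorted(range(len(pool)), key=lambda i: -u[i])
    return [pool[i] for i in order[:m_g]]                              # Step 6
```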
When updating particle i’s velocity with Equation (24), the item b is randomly selected from B based on the roulette wheel policy. Assuming that all solutions in B are sorted (as stated in Step 5), the probability of choosing the solution ranked at the k-th place is defined as:
$\Pr[k] = \dfrac{2 (|B| + 1 - k)}{|B|^2 + |B|}, \quad k = 1, \dots, |B|.$
Under such a probability assignment, the selection probability decreases linearly with the sorted rank. For example, if $|B| = 5$, the probabilities assigned to the solutions (in sorted order) are 5/15, 4/15, 3/15, 2/15 and 1/15, respectively.
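A small sketch of this roulette-wheel rule, with the linearly decreasing rank probabilities defined above, could look as follows; sorted_B is assumed to be already ordered by decreasing crowding distance, as produced in Step 5.

```python
import random

# Roulette-wheel selection of the global guide with the rank probabilities
# Pr[k] = 2(|B| + 1 - k) / (|B|^2 + |B|); sorted_B is assumed to be ordered by
# decreasing crowding distance.

def select_global_guide(sorted_B, rng=random):
    n = len(sorted_B)
    weights = [2 * (n + 1 - k) / (n * n + n) for k in range(1, n + 1)]
    return rng.choices(sorted_B, weights=weights, k=1)[0]

# With |B| = 5 the weights reproduce the example: 5/15, 4/15, 3/15, 2/15, 1/15.
print([round(2 * (5 + 1 - k) / (5 * 5 + 5), 4) for k in range(1, 6)])
# [0.3333, 0.2667, 0.2, 0.1333, 0.0667]
```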

5.8. Summary of the MOPSO Algorithm

We now provide an overall description of the proposed MOPSO algorithm. In addition, an associated flowchart is given as Figure 4 to help visualize the main structure of the algorithm (where the green lines indicate operations for storing information to and retrieving information from the personal best and global best solution sets).
Step 1:
[Initialization] Apply the procedures given in Section 5.4 to generate the initial positions of the $N$ particles, i.e., $\{x_i(0) \mid i = 1, \dots, N\}$. Initialize the particles' velocities by generating each component of $v_i(0)$ randomly from $[-L/4, L/4]$. Let $B_i = \{x_i(0)\}$ (for $i = 1, \dots, N$) and $B = \emptyset$. Set the iteration index $t = 0$.
Step 2:
[Global best] Update the global best solution set B based on the currently available personal best solution sets B 1 , , B N by applying the procedure given in Section 5.7 (part (2)).
Step 3:
[Termination test] If the termination condition is satisfied, terminate the algorithm with B as the output solutions. Otherwise, continue with the following steps.
Step 4:
[Time-variant parameters] Evaluate the current values of the time-variant parameters, i.e., $\omega(t)$, $c_1(t)$ and $c_2(t)$, using Equations (27)–(29), respectively.
Step 5:
[Velocity update] Determine b i by randomly selecting a solution from B i . Determine b by applying the roulette wheel method to select a solution from B. Based on the selected b i and b , update the velocity for each particle i according to Equation (24), thus yielding v i ( t + 1 ) .
Step 6:
[Position update] Update the position for each particle i according to Equation (25), yielding $x_i(t+1)$. If any component of the new position vector falls below $\varepsilon$, it is reset to $\varepsilon$; if any component exceeds $L - \varepsilon$, it is reset to $L - \varepsilon$ ($\varepsilon$ is a very small constant, say 0.001).
Step 7:
[Personal best] Update the personal best solution set B i for each particle i with the newly obtained solution x i ( t + 1 ) by following the rules stated in Section 5.7 (part (1)).
Step 8:
[Loop] Let t t + 1 , and then return to Step 2.
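For orientation, the sketch below assembles the earlier code fragments into a skeleton of Steps 1 to 8. All problem-specific routines (the initialization heuristics of Section 5.4, the decoding and heuristic evaluation, and the B&B evaluation of Section 4) are passed in as callbacks, a fixed iteration budget stands in for the termination test of Step 3, and the velocity update uses the canonical PSO rule, which Equation (24) of the paper is assumed to instantiate; none of this should be read as the author's implementation.

```python
import random

# Skeleton of Steps 1-8, assembled from the earlier sketches (dominates,
# time_variant_parameters, update_global_best, select_global_guide).

def mopso(n_cars, L, init_positions, evaluate, exact_twt,
          N=100, T=1000, eps=1e-3, m_p=4, m_g=25):
    """init_positions(N) -> list of N position vectors (length n_cars, values in (0, L));
       evaluate(x)       -> (TPE, heuristic TWT) of the schedule decoded from x;
       exact_twt(x)      -> TWT of x evaluated by the B&B subroutine."""
    X = init_positions(N)                                             # Step 1
    V = [[random.uniform(-L / 4, L / 4) for _ in range(n_cars)] for _ in range(N)]
    B_i = [[(list(x), evaluate(x))] for x in X]                       # personal best sets
    B = []                                                            # global best set
    for t in range(T):                                                # Step 3: fixed budget
        pool = [entry for archive in B_i for entry in archive]        # Step 2
        B = update_global_best(pool, exact_twt, m_g)
        omega, c1, c2 = time_variant_parameters(t, T)                 # Step 4
        for i in range(N):
            bi, _ = random.choice(B_i[i])                             # Step 5
            bg, _ = select_global_guide(B)
            for d in range(n_cars):
                V[i][d] = (omega * V[i][d]
                           + c1 * random.random() * (bi[d] - X[i][d])
                           + c2 * random.random() * (bg[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], eps), L - eps)   # Step 6
            objs = evaluate(X[i])                                     # Step 7
            if not any(dominates(o, objs) for _, o in B_i[i]):
                B_i[i] = [(x, o) for x, o in B_i[i] if not dominates(objs, o)]
                B_i[i].append((list(X[i]), objs))
                B_i[i] = B_i[i][-m_p:]                                # keep newest m_p
    return B
```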

6. Computational Experiments and Results

6.1. Experimental Setup

To examine the performance of the proposed MOPSO algorithm on the studied problem, an extensive set of test instances has been generated according to the following specifications, which are inspired by real-world production data.
  • The number of cars (n) and the number of color options (E) are considered in a coordinated way at eight different levels, i.e., ( n , E ) { ( 50 , 3 ) , ( 50 , 6 ) , ( 100 , 6 ) , ( 100 , 10 ) , ( 150 , 9 ) , ( 150 , 12 ) , ( 200 , 10 ) , ( 200 , 15 ) } .
  • The number of lanes in the selectivity bank is considered at three levels, i.e., L { 10 , 15 , 20 } . We do not consider limitations on the capacity of each lane because it is assumed that the cars pass through the lanes dynamically.
  • The required color of each car i is randomly drawn from the set $\{1, \dots, E\}$ with equal probability. The position-based due date of car i is set as $d_i = \zeta_i + 1$, where $\zeta_i$ follows the binomial distribution $B(n-1, 0.5)$. The weight $w_i$ of car i is generated from the discrete uniform distribution $U[1, 10]$.
  • The emission cost coefficient $\delta_{e_1 e_2}$ is set to $\mu \times (e_2 - e_1)$ for $e_1 < e_2$, where $\mu$ is generated from the uniform distribution $U[1, 2]$. Meanwhile, it is assumed that $\delta_{e_2 e_1} = 0.75 \times \delta_{e_1 e_2}$.
We have generated $8 \times 3 \times 5 = 120$ test instances according to the above specifications (8 combinations of n and E, 3 values of L, and, to increase reliability, 5 instances for each scenario characterized by the triplet $(n, E, L)$).
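A sketch of an instance generator following these specifications is given below. Whether the coefficient μ is drawn once per instance or once per colour pair is not specified in the text, so the sketch draws it per pair; the generator itself (names, data layout) is purely illustrative.

```python
import random

# Illustrative instance generator following the bullet list above. The binomial
# due dates are sampled as sums of Bernoulli trials to avoid extra dependencies,
# and mu is drawn once per colour pair (an assumption of this sketch).

def generate_instance(n, E, L, seed=0):
    rng = random.Random(seed)
    colors = [rng.randint(1, E) for _ in range(n)]                     # uniform over {1, ..., E}
    due = [sum(rng.random() < 0.5 for _ in range(n - 1)) + 1           # d_i = zeta_i + 1
           for _ in range(n)]
    weights = [rng.randint(1, 10) for _ in range(n)]                   # w_i ~ U[1, 10]
    delta = {}
    for e1 in range(1, E + 1):
        for e2 in range(e1 + 1, E + 1):
            mu = rng.uniform(1.0, 2.0)
            delta[(e1, e2)] = mu * (e2 - e1)                           # forward colour switch
            delta[(e2, e1)] = 0.75 * delta[(e1, e2)]                   # reverse switch
    return {"n": n, "E": E, "L": L, "colors": colors,
            "due_dates": due, "weights": weights, "emission": delta}

inst = generate_instance(50, 3, 10)
print(inst["colors"][:10], inst["due_dates"][:5], round(inst["emission"][(1, 3)], 3))
```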
Following the standards for benchmarking multi-objective optimization algorithms, we adopt four performance indicators to assess the quality of obtained solutions. Suppose that X is a set of non-dominated solutions achieved by a certain optimization algorithm. Then, the performance indicators can be defined as follows.
  • The ONVG (overall non-dominated vector generation) indicator [37] measures the number of solutions in the non-dominated solution set, i.e., $\mathrm{ONVG}(X) = |X|$. Higher ONVG values suggest that the corresponding algorithm provides a wider range of choices for the ultimate decision-making.
  • The CM (coverage metric) indicator [38] is defined on the basis of another non-dominated solution set ( Y ) for comparison with X . Formally, it is defined as
    $C(X, Y) = \dfrac{ \left| \{ y \in Y \mid \exists\, x \in X : x \succeq y \} \right| }{ |Y| },$
    where $x \succeq y$ indicates that either x dominates y or $f(x) = f(y)$ (i.e., the two have equal objective vectors). It is clear that $C(X, Y)$ reflects the proportion of solutions in Y that are dominated by (or equal to) some solution in X. Therefore, a higher value of $C(X, Y)$ suggests better performance of the algorithm which produces X.
  • The D av and D max indicators [39] describe the distance between the solutions in X and a reference solution set R (ideally, the exact Pareto frontier of the problem) in the objective space. Formally, the distance metrics are defined as
    $D_{\mathrm{av}}(X, R) = \dfrac{1}{|R|} \sum_{x' \in R} \min_{x \in X} \{ d(x, x') \},$
    $D_{\max}(X, R) = \max_{x' \in R} \min_{x \in X} \{ d(x, x') \},$
    where the reference set R is usually composed of all non-dominated solutions obtained by the compared algorithms, as an approximation to the real Pareto frontier when the latter is unknown. In the above equations, $d(x, x') = \max_{z=1,\dots,Z} \{ (f_z(x) - f_z(x')) / \Delta_z \}$, where $\Delta_z = f_z^{\max} - f_z^{\min}$ is the value range of the z-th objective function. Clearly, smaller values of $D_{\mathrm{av}}$ and $D_{\max}$ suggest that the solutions in X are closer to the estimated Pareto frontier.
  • The TS (Tan’s Spacing) indicator [40] reflects how evenly the solutions in X are distributed. It is defined as
    $TS(X) = \dfrac{1}{\bar{D}} \sqrt{ \dfrac{1}{|X|} \sum_{i=1}^{|X|} ( D_i - \bar{D} )^2 },$
    where $\bar{D} = (1/|X|) \sum_{i=1}^{|X|} D_i$, with $D_i$ denoting the Euclidean distance between $x_i \in X$ and its closest neighbor solution in X (with regard to the objective space). Smaller values of TS indicate that the solutions are distributed more evenly and are thus more preferable for decision-making.
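The four indicators can be computed directly from their definitions, as in the following sketch. Objective vectors are (TPE, TWT) tuples, the normalization ranges Δ_z are taken over the reference set here (the text does not pin this down), and the helper names are ours.

```python
import math

# Direct implementations of the four indicators under the definitions above.
# X and Y are non-dominated sets of (TPE, TWT) tuples and R is the reference set.

def onvg(X):
    return len(X)

def coverage(X, Y):
    def covers(a, b):                      # a dominates b or has equal objectives
        return all(x <= y for x, y in zip(a, b))
    return sum(any(covers(x, y) for x in X) for y in Y) / len(Y)

def d_av_d_max(X, R):
    Z = len(R[0])
    span = [max(max(r[z] for r in R) - min(r[z] for r in R), 1e-12) for z in range(Z)]
    def d(x, r):
        return max((x[z] - r[z]) / span[z] for z in range(Z))
    gaps = [min(d(x, r) for x in X) for r in R]
    return sum(gaps) / len(R), max(gaps)

def tan_spacing(X):                        # requires |X| >= 2
    D = [min(math.dist(x, y) for j, y in enumerate(X) if j != i)
         for i, x in enumerate(X)]
    mean = sum(D) / len(D)
    return math.sqrt(sum((di - mean) ** 2 for di in D) / len(D)) / mean

X = [(10.0, 40.0), (12.0, 30.0), (15.0, 25.0)]
R = [(9.0, 41.0), (11.0, 29.0), (14.0, 24.0)]
print(onvg(X), coverage(X, R), d_av_d_max(X, R), round(tan_spacing(X), 3))
```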
Based on preliminary experiments, the MOPSO parameter settings that will be adopted in the subsequent computational tests are listed as follows:
  • Number of particles in the swarm: N = 100 ;
  • Beginning and ending values of inertia weight: ω b = 0.7 , ω e = 0.4 ;
  • Beginning and ending values of acceleration coefficients: c 1 b = 2.5 , c 1 e = 0.5 , c 2 b = 0.5 , c 2 e = 2.5 ;
  • Size limit on personal best solution sets: m p = 4 ;
  • Size limit on global best solution set: m g = 25 .
The beginning and ending values of the parameters ω, c1 and c2 are chosen based on the suggestions given in [41]. The only difference lies in the beginning value of ω (i.e., ω_b), which we set to 0.7 instead of the suggested 0.9. The main reason is that the proposed MOPSO relies on a heuristic initialization technique rather than completely randomized initial solutions, and consequently does not need a strongly diversified search in the beginning stage. Preliminary experiments have shown that setting ω_b = 0.7 leads to the most desirable results.

6.2. Evaluation of Optimality

The proposed MOPSO is first compared with an exact MILP solver to assess its ability to find optimal solutions. The reference solutions have been obtained by the CPLEX solver using the weighted sum approach, with the objective function rewritten as $f = \lambda \cdot TPE + (1 - \lambda) \cdot TWT$. The weighting coefficient λ is enumerated from 0.01 to 0.99 with a step size of 0.01. Because the exact solver can only address small-sized instances within reasonable time, the group of smallest test instances (i.e., with $(n, E, L) = (50, 3, 10)$) is used for the comparison. The proposed MOPSO is executed 20 times independently on each instance (with 150 s allowed per run), and the values of the four performance indicators are then calculated. The resulting averages are shown in Table 3.
The results clearly show that the proposed MOPSO achieves remarkable solution quality. On average, the solutions found by the MOPSO cover 45% of the solutions obtained by CPLEX (i.e., they dominate them or match their objective vectors). In addition, the MOPSO attains low values of D_av and D_max (0.011 and 0.027, respectively), which suggests that the obtained solutions are sufficiently close to the CPLEX solutions. In terms of the ONVG metric, the MOPSO performs slightly worse than CPLEX (in view of the number of obtained solutions). When it comes to the TS metric, the evenness of the solution distribution is comparable between the two approaches (1.39 vs. 1.37 on average). It is worth noting that the weighted sum method may fail to find certain Pareto-optimal solutions if the Pareto frontier is not fully convex; as a result, C_2 can be less than 1 for some instances. Finally, when computational time is taken into account, the proposed MOPSO is clearly much more efficient: the CPLEX solver consumed more than three hours for each instance.
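For completeness, the λ-enumeration used to build the CPLEX reference set can be sketched as follows; solve_weighted is a placeholder for a call to the MILP solver that minimizes λ·TPE + (1 − λ)·TWT and returns the resulting objective pair, so this is not the actual CPLEX API.

```python
# Sketch of the lambda sweep used to build the reference set. solve_weighted is a
# placeholder that returns the (TPE, TWT) tuple of the weighted-sum optimum.

def weighted_sum_frontier(solve_weighted):
    # enumerate lambda = 0.01, 0.02, ..., 0.99 and collect the optimal objective pairs
    points = {solve_weighted(round(k * 0.01, 10)) for k in range(1, 100)}
    # keep only the non-dominated points (duplicates already merged by the set)
    return sorted(p for p in points
                  if not any(all(q[z] <= p[z] for z in range(2)) and q != p
                             for q in points))
```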

6.3. Comparison with Typical MOEAs

To provide a systematic performance evaluation, we will compare the proposed MOPSO with high-performing multi-objective evolutionary algorithms (MOEAs) in the literature. In particular, the MOEA/D-ACO [42] and the pccsAMOPSO [43] have been selected for the comparison purpose. The former is an algorithm developed by combining the merits of the well-known MOEA/D (multi-objective evolutionary algorithm based on decomposition) and ACO (ant colony optimization), while the latter relies on a novel strategy called parallel cell coordinate system to balance convergence and diversity in the evolutionary process of multi-objective PSO. These two algorithms can represent the state-of-the-art techniques for solving generic multi-objective optimization problems. In the following computational experiments, the parameters of the two compared algorithms are determined based on the suggested values in the original publications and then fine-tuned to suit the features of our problem.
The proposed MOPSO as well as the two compared algorithms MOEA/D-ACO and pccsAMOPSO have been implemented with Visual C++ 2015 on a PC platform (Intel Core i7-4790 3.60 GHz CPU, 16 GB RAM, Windows 10 OS). To ensure a fair comparison, we impose a hard limit on the computational time available to each algorithm in a single execution, i.e., $3 \times n \times L / 10$ seconds are allowed for solving an instance with n cars and L lanes in the selectivity bank. Under such settings, the tested algorithm must stop running and immediately output its current non-dominated solution set as soon as the time budget is used up. Consequently, the number of generations evolved in one execution is not a fixed constant but depends on the complexity of the tasks performed in each iteration.
Each of the algorithms, including MOPSO, MOEA/D-ACO and pccsAMOPSO, has been run 20 times independently on each test instance. The averaged values of the four performance indicators over the 20 runs are reported in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 (grouped by n, the number of cars). In Table 8, Table 9, Table 10 and Table 11, X, Y and Z denote the solution sets output by MOPSO, MOEA/D-ACO and pccsAMOPSO, respectively. An asterisk in the tables denotes that the corresponding instance has the same size as the previous one. To examine the statistical significance of the results, we have performed paired-sample t-tests on the indicator values in a pairwise manner (MOPSO vs. MOEA/D-ACO and MOPSO vs. pccsAMOPSO). The p-values obtained from the t-tests are reported in Table 12 and Table 13.
The following comments can be made regarding the computational results.
  • Focusing on the ONVG indicator, we can see from the "Avg." rows that the proposed MOPSO obtains more non-dominated solutions than the compared algorithms on average (except in the comparison with MOEA/D-ACO on the group of 200-car instances). The statistical results also suggest that the differences are significant in most cases. The MOPSO outperforms the pccsAMOPSO consistently on all groups of instances, whereas its advantage over MOEA/D-ACO diminishes as the problem size grows. The relatively stronger performance of MOEA/D-ACO on larger instances reveals the benefit of the decomposition-based optimization approach, which promotes diversification and thereby handles huge solution spaces effectively. Overall, the number of obtained solutions increases with the problem size, which is not surprising because an increased number of cars and buffer lanes creates more opportunities for compromises between the emission objective and the tardiness objective.
  • Looking at the $D_{\mathrm{av}}$ and $D_{\max}$ indicators, we find that the MOPSO clearly outperforms both compared algorithms in terms of the average and maximum distances to the approximated Pareto frontier. In particular, the MOPSO achieves the smallest average value of $D_{\mathrm{av}}$ among the three algorithms on 89 of the 120 instances, and the smallest average value of $D_{\max}$ on 94 of the 120 instances. The superior performance can be attributed to the enhanced search ability provided by the redesigned personal/global best handling mechanisms and the other problem-specific components of the algorithm. It is worth pointing out that the reference set R used for calculating $D_{\mathrm{av}}$ and $D_{\max}$, which represents the approximated Pareto frontier, is constructed by executing the two compared algorithms for a sufficiently large number of iterations; this creates a bias in favor of MOEA/D-ACO and pccsAMOPSO. The remarkable performance of the proposed algorithm despite such an adverse condition is therefore quite convincing.
  • Observing the TS indicator, we notice that the MOPSO achieves the smallest average value among the three algorithms on 90 of the 120 instances. The superior performance can be attributed to the improved solution sorting mechanism in the MOPSO, which relies on a precisely defined crowding distance measure. In particular, our distance measure is based on Euclidean distances, which complies with the definition of the TS indicator. By contrast, the MOEA/D-ACO does not incorporate an explicit crowdedness-based sorting mechanism and therefore shows the poorest performance in terms of the TS indicator. The pccsAMOPSO algorithm utilizes a novel distance measure called the parallel cell distance to estimate the density of solutions, which turns out to be effective for our problem, especially on relatively small instances. Overall, our distance measure proves more effective under the TS indicator because of its accuracy in characterizing the distribution of solutions in the objective space. Finally, a noticeable trend is that the TS values generally increase with the instance size; this degradation reflects the exponential growth of the solution space, which adds to the difficulty of obtaining evenly spaced non-dominated solutions.
  • According to the CM indicator, the average value of $C(X, Y)$ stays around 0.90 and the average value of $C(X, Z)$ stays above 0.93 across all instance groups, which means that a large portion of the solutions obtained by MOEA/D-ACO (Y) and pccsAMOPSO (Z) are dominated by, or equal to (in terms of the objective vector), certain solutions output by the proposed algorithm (X). Meanwhile, the average values of $C(Y, X)$ and $C(Z, X)$ never exceed 0.08, which indicates that the solution quality of the compared algorithms cannot match that of the proposed algorithm. To reveal the relative strengths of MOEA/D-ACO and pccsAMOPSO, we also report the average values of $C(Y, Z)$ and $C(Z, Y)$. These results show that MOEA/D-ACO maintains an advantage over pccsAMOPSO, and the superiority becomes more apparent as the instance size grows ($C(Y, Z)$ increases from 0.58 for n = 50 to 0.71 for n = 200, while $C(Z, Y)$ decreases from 0.37 to 0.25), which again highlights the flexibility of the decomposition-based approach.
  • As suggested by the statistical results in Table 12 and Table 13 (one-tailed p-values), 29 of the 32 paired samples are significantly different (at the 0.01 significance level). The 3 insignificant cases all occur for the ONVG indicator; the underlying reason is that the number of non-dominated solutions obtained in a single run is relatively unstable (more prone to random perturbations than the other indicators). In general, the statistical results verify that the proposed MOPSO significantly outperforms the compared approaches on the bi-objective scheduling problem considered in this paper.
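As an illustration of the statistical procedure (not the actual experimental data), a paired one-tailed t-test of the kind reported in Tables 12 and 13 can be run as follows; the arrays are hypothetical per-instance indicator values, and the alternative keyword of scipy.stats.ttest_rel requires SciPy 1.6 or later.

```python
from scipy import stats

# Illustrative form of the paired one-tailed t-test behind Tables 12 and 13.
mopso_dav = [0.021, 0.044, 0.036, 0.073, 0.043]   # hypothetical values
moead_dav = [0.041, 0.097, 0.057, 0.130, 0.087]   # hypothetical values

# one-tailed alternative: MOPSO yields smaller D_av than MOEA/D-ACO
t_stat, p_value = stats.ttest_rel(mopso_dav, moead_dav, alternative="less")
print(t_stat, p_value)
```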

7. Conclusions

In this paper, we address an environment-aware production scheduling problem that arises in the car manufacturing industry. The problem has been defined as a bi-objective optimization model, in which one objective reflects the consideration of pollution-minimization requirements in the paint shop while the other objective characterizes the traditional goal of tardiness-minimization in the subsequent assembly shop. A mixed-integer linear programming formulation is developed to formally introduce the problem. Due to the high complexity and intractability, we devise a multi-objective particle swarm optimization algorithm (MOPSO) to solve the problem and obtain satisfactory production schedules within reasonable time. Since the MOPSO is aimed at optimizing the car sequence for the paint shop and the allocation of cars to the buffer lanes, the associated car sequencing decision for the assembly shop is treated as a separate subproblem. A branch-and-bound algorithm is proposed to solve this subproblem exactly, and also, a dispatching rule-based heuristic method is suggested to produce quick solutions for the subproblem. The former approach is utilized for identification of global best solutions in the MOPSO while the latter is used in all other situations when a solution needs to be evaluated. Such a design is inspired by the ordinal optimization philosophy (i.e., using crude models for solution evaluation will not lead to significant loss of optimal solutions) [44], which helps to bring down the computational burden of the whole algorithm.
In addition to the above considerations, the proposed MOPSO algorithm is characterized by the following important features:
  • A random key-based encoding scheme which facilitates PSO implementation;
  • A dedicated procedure for initialization of particles by exploiting problem-specific information;
  • A set of time-variant parameters which help to achieve a better balance of extensive exploration and intensive exploitation by means of adjusting the search patterns dynamically;
  • Some novel mechanisms to deal with multi-objective optimization (e.g., strategies for sorting solutions based on an accurately defined crowding distance measure and techniques for maintaining personal/global best solutions considering both Pareto dominance and diversity).
To test the proposed algorithm, we have conducted extensive computational experiments using 120 randomly generated instances of different sizes. In the optimality test, the MOPSO is able to produce high-quality solutions that are sufficiently close to the solutions obtained by the CPLEX solver for small-sized instances. In the principal experiments, the MOPSO is compared with two state-of-the-art multi-objective evolutionary algorithms under strictly the same computational time budget. The adopted performance indicators show that our algorithm has outperformed the compared approaches on a great majority of instances and in a statistically significant sense.
Future research will be focused on the following aspects. Firstly, it is interesting to incorporate the consideration of pollution-reduction requirements in other production units of automotive manufacturing systems, e.g., the body shop, which is responsible for building complete car bodies preceding the paint shop. Integrated scheduling of the entire production line will further contribute to reducing the overall pollutant emissions on the system level. Secondly, the optimization algorithm could be enhanced from several perspectives to ensure that it handles the integrated scheduling problem more efficiently. The central idea is to devise computationally fast local search components to be embedded into the MOPSO framework, especially the local search strategies that can make use of problem-specific structural properties (e.g., dominance property for a certain objective function).

Acknowledgments

This research is supported by the Natural Science Foundation of China under Grants Nos. 61473141 and U1660202.

Conflicts of Interest

The author declares no conflict of interest.

Notations

The following notations are used in this paper:
$n$: number of cars
$E$: number of color options
$L$: number of lanes in the buffer
$d_i$: position-based due date of car i
$w_i$: priority weight of car i
$T_i$: tardiness of car i in the assembly shop
$\delta_{e_1 e_2}$: amount of emission incurred when switching from color $e_1$ to color $e_2$
$\beta_{ei}$: a 0–1 constant which equals 1 iff car i requires color e
$TPE$: total pollutant emissions in the paint shop
$TWT$: total weighted tardiness in the assembly shop
$LB$: lower bound of TWT used by the branch-and-bound algorithm
$x_i$: position of particle i in the MOPSO
$v_i$: velocity of particle i in the MOPSO
$t$: iteration index in the MOPSO
$\omega(t)$: time-variant inertia weight in the MOPSO
$c_1(t), c_2(t)$: time-variant acceleration coefficients in the MOPSO
$b_i$: personal best solution (associated with particle i) selected for position update
$b$: global best solution selected for position update
$B_i$: personal best solution set maintained for particle i
$B$: global best solution set
$m_p$: maximum size of personal best solution sets
$m_g$: maximum size of global best solution set
$N$: number of particles in the MOPSO
$u_i$: crowding distance value for solution i in a non-dominated set
$\gamma$: truncation parameter in the calculation of $u_i$
$ONVG$: number of non-dominated solutions in the obtained solution set
$CM$: coverage metric which reflects the dominance relation between two solution sets
$D_{\mathrm{av}}, D_{\max}$: distance metrics which depict how far the solutions are situated from the true Pareto frontier
$TS$: spacing metric which describes how evenly the solutions in a set are distributed

References

  1. He, L.Y.; Ou, J.J. Pollution Emissions, Environmental Policy, and Marginal Abatement Costs. Int. J. Environ. Res. Public Health 2017, 14, 1509. [Google Scholar] [CrossRef] [PubMed]
  2. Zhang, R.; Chiong, R.; Michalewicz, Z.; Chang, P.C. Sustainable scheduling of manufacturing and transportation systems. Eur. J. Oper. Res. 2016, 3, 741–743. [Google Scholar] [CrossRef]
  3. Zhang, H.; Zhao, F.; Fang, K.; Sutherland, J.W. Energy-conscious flow shop scheduling under time-of-use electricity tariffs. CIRP Ann. Manuf. Technol. 2014, 63, 37–40. [Google Scholar] [CrossRef]
  4. Liu, C.H.; Huang, D.H. Reduction of power consumption and carbon footprints by applying multi-objective optimisation via genetic algorithms. Int. J. Prod. Res. 2014, 52, 337–352. [Google Scholar] [CrossRef]
  5. Zhou, L.; Xu, K.; Cheng, X.; Xu, Y.; Jia, Q. Study on optimizing production scheduling for water-saving in textile dyeing industry. J. Clean. Prod. 2017, 141, 721–727. [Google Scholar] [CrossRef]
  6. Spieckermann, S.; Gutenschwager, K.; Voß, S. A sequential ordering problem in automotive paint shops. Int. J. Prod. Res. 2004, 42, 1865–1878. [Google Scholar] [CrossRef]
  7. Moon, D.H.; Kim, H.S.; Song, C. A simulation study for implementing color rescheduling storage in an automotive factory. Simulation 2005, 81, 625–635. [Google Scholar] [CrossRef]
  8. Hartmann, S.A.; Runkler, T.A. Online optimization of a color sorting assembly buffer using ant colony optimization. In Operations Research Proceedings 2007: Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), Saarbrücken, Germany, 5–7 September 2007; Kalcsics, J., Nickel, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 415–420. [Google Scholar]
  9. Sun, H.; Fan, S.; Shao, X.; Zhou, J. A colour-batching problem using selectivity banks in automobile paint shops. Int. J. Prod. Res. 2015, 53, 1124–1142. [Google Scholar] [CrossRef]
  10. Ko, S.S.; Han, Y.H.; Choi, J.Y. Paint batching problem on M-to-1 conveyor systems. Comput. Oper. Res. 2016, 74, 118–126. [Google Scholar] [CrossRef]
  11. Epping, T.; Hochstättler, W.; Oertel, P. Complexity results on a paint shop problem. Discret. Appl. Math. 2004, 136, 217–226. [Google Scholar] [CrossRef]
  12. Xu, Y.; Zhou, J.G. A virtual resequencing problem in automobile paint shops. In Proceedings of the 22nd International Conference on Industrial Engineering and Engineering Management 2015: Core Theory and Applications of Industrial Engineering (Volume 1); Qi, E., Shen, J., Dou, R., Eds.; Atlantis Press: Paris, France, 2016; pp. 71–80. [Google Scholar]
  13. Amini, H.; Meunier, F.; Michel, H.; Mohajeri, A. Greedy colorings for the binary paintshop problem. J. Discret. Algorithms 2010, 8, 8–14. [Google Scholar] [CrossRef]
  14. Andres, S.D.; Hochstättler, W. Some heuristics for the binary paint shop problem and their expected number of colour changes. J. Discret. Algorithms 2011, 9, 203–211. [Google Scholar] [CrossRef]
  15. Sun, H.; Han, J. A study on implementing color-batching with selectivity banks in automotive paint shops. J. Manuf. Syst. 2017, 44, 42–52. [Google Scholar] [CrossRef]
  16. Parrello, B.D.; Kabat, W.C.; Wos, L. Job-shop scheduling using automated reasoning: A case study of the car-sequencing problem. J. Autom. Reason. 1986, 2, 1–42. [Google Scholar] [CrossRef]
  17. Kis, T. On the complexity of the car sequencing problem. Oper. Res. Lett. 2004, 32, 331–335. [Google Scholar] [CrossRef]
  18. Estellon, B.; Gardi, F. Car sequencing is NP-hard: A short proof. J. Oper. Res. Soc. 2013, 64, 1503–1504. [Google Scholar] [CrossRef]
  19. Golle, U.; Rothlauf, F.; Boysen, N. Iterative beam search for car sequencing. Ann. Oper. Res. 2015, 226, 239–254. [Google Scholar] [CrossRef]
  20. Zinflou, A.; Gagné, C.; Gravel, M. Genetic algorithm with hybrid integer linear programming crossover operators for the car-sequencing problem. INFOR Inform. Syst. Oper. Res. 2010, 48, 23–37. [Google Scholar] [CrossRef]
  21. Thiruvady, D.; Ernst, A.; Wallace, M. A Lagrangian-ACO matheuristic for car sequencing. EURO J. Comput. Optim. 2014, 2, 279–296. [Google Scholar] [CrossRef]
  22. Solnon, C.; Cung, V.D.; Nguyen, A.; Artigues, C. The car sequencing problem: Overview of state-of-the-art methods and industrial case-study of the ROADEF’2005 challenge problem. Eur. J. Oper. Res. 2008, 191, 912–927. [Google Scholar] [CrossRef]
  23. Estellon, B.; Gardi, F.; Nouioua, K. Two local search approaches for solving real-life car sequencing problems. Eur. J. Oper. Res. 2008, 191, 928–944. [Google Scholar] [CrossRef]
  24. Ribeiro, C.C.; Aloise, D.; Noronha, T.F.; Rocha, C.; Urrutia, S. A hybrid heuristic for a multi-objective real-life car sequencing problem with painting and assembly line constraints. Eur. J. Oper. Res. 2008, 191, 981–992. [Google Scholar] [CrossRef]
  25. Briant, O.; Naddef, D.; Mounié, G. Greedy approach and multi-criteria simulated annealing for the car sequencing problem. Eur. J. Oper. Res. 2008, 191, 993–1003. [Google Scholar] [CrossRef]
  26. Gavranović, H. Local search and suffix tree for car-sequencing problem with colors. Eur. J. Oper. Res. 2008, 191, 972–980. [Google Scholar] [CrossRef]
  27. Jahren, E.; Achá, R.A. A column generation approach and new bounds for the car sequencing problem. Ann. Oper. Res. 2017, in press. [Google Scholar] [CrossRef]
  28. Zhang, S.; Yu, D.; Shao, X.; Wang, S.; Zhang, C.; Lin, W. A novel artificial ecological niche optimization algorithm for car sequencing problem considering energy consumption. Proc. Inst. Mech. Eng. Part B 2015, 229, 546–562. [Google Scholar] [CrossRef]
  29. Yavuz, M. Iterated beam search for the combined car sequencing and level scheduling problem. Int. J. Prod. Res. 2013, 51, 3698–3718. [Google Scholar] [CrossRef]
  30. Chutima, P.; Olarnviwatchai, S. A multi-objective car sequencing problem on two-sided assembly lines. J. Intell. Manuf. 2017, in press. [Google Scholar] [CrossRef]
  31. Boysen, N.; Zenker, M. A decomposition approach for the car resequencing problem with selectivity banks. Comput. Oper. Res. 2013, 40, 98–108. [Google Scholar] [CrossRef]
  32. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems, 5th ed.; Springer: New York, NY, USA, 2016. [Google Scholar]
  33. Leung, J.Y.T.; Young, G.H. Minimizing total tardiness on a single machine with precedence constraints. ORSA J. Comput. 1990, 2, 346–352. [Google Scholar] [CrossRef]
  34. Banks, A.; Vincent, J.; Anyakoha, C. A review of particle swarm optimization. Part I: Background and development. Nat. Comput. 2007, 6, 467–484. [Google Scholar] [CrossRef]
  35. Vepsalainen, A.P.; Morton, T.E. Priority rules for job shops with weighted tardiness costs. Manag. Sci. 1987, 33, 1035–1047. [Google Scholar] [CrossRef]
  36. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  37. Van Veldhuizen, D.A.; Lamont, G.B. On measuring multiobjective evolutionary algorithm performance. In Proceedings of the IEEE Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000; Volume 1, pp. 204–211. [Google Scholar]
  38. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271. [Google Scholar] [CrossRef]
  39. Ulungu, E.L.; Teghem, J.; Ost, C. Efficiency of interactive multi-objective simulated annealing through a case study. J. Oper. Res. Soc. 1998, 49, 1044–1050. [Google Scholar] [CrossRef]
  40. Tan, K.C.; Goh, C.K.; Yang, Y.J.; Lee, T.H. Evolving better population distribution and exploration in evolutionary multi-objective optimization. Eur. J. Oper. Res. 2006, 171, 463–495. [Google Scholar] [CrossRef]
  41. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
  42. Ke, L.; Zhang, Q.; Battiti, R. MOEA/D-ACO: A multiobjective evolutionary algorithm using decomposition and ant colony. IEEE Trans. Cybern. 2013, 43, 1845–1859. [Google Scholar] [CrossRef] [PubMed]
  43. Hu, W.; Yen, G.G. Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Trans. Evol. Comput. 2015, 19, 1–18. [Google Scholar]
  44. Horng, S.C.; Lin, S.S. An ordinal optimization theory-based algorithm for a class of simulation optimization problems and application. Expert Syst. Appl. 2009, 36, 9340–9349. [Google Scholar] [CrossRef]
Figure 1. Illustration of a production system for automotive painting and assembly.
Figure 2. An example of selectivity bank with two lanes.
Figure 3. Example search process of the branch-and-bound algorithm.
Figure 4. Flowchart of the proposed MOPSO algorithm.
Table 1. Job data of the example instance.
Job index (j) | 1 | 2 | 3 | 4
Weight (w_j) | 5 | 1 | 8 | 3
Due date (d_j) | 2 | 2 | 1 | 1
Table 2. Illustration of the solution decoding scheme.
i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
x_i | 1.80 | 2.19 | 0.21 | 1.32 | 0.95 | 2.05 | 1.54 | 0.82
(k_i, l_i) | (6, 2) | (2, 3) | (3, 1) | (4, 2) | (8, 1) | (1, 3) | (5, 2) | (7, 1)
Table 3. Comparison between MOPSO and CPLEX (with the weighted sum method) based on the instances with (n, E, L) = (50, 3, 10).
No. | ONVG (MOPSO) | ONVG (CPLEX) | D_av (MOPSO) | D_av (CPLEX) | D_max (MOPSO) | D_max (CPLEX) | TS (MOPSO) | TS (CPLEX) | C_1 | C_2
1 | 9.86 | 12.00 | 0.009 | 0.000 | 0.018 | 0.000 | 1.44 | 1.33 | 0.40 | 1.00
2 | 12.73 | 14.00 | 0.011 | 0.002 | 0.028 | 0.012 | 1.32 | 1.37 | 0.43 | 0.98
3 | 10.46 | 12.00 | 0.013 | 0.000 | 0.030 | 0.000 | 1.30 | 1.44 | 0.47 | 1.00
4 | 10.91 | 13.00 | 0.008 | 0.000 | 0.024 | 0.000 | 1.42 | 1.40 | 0.39 | 1.00
5 | 12.68 | 14.00 | 0.016 | 0.004 | 0.034 | 0.010 | 1.49 | 1.30 | 0.54 | 0.98
Avg. | 11.33 | 13.00 | 0.011 | 0.001 | 0.027 | 0.004 | 1.39 | 1.37 | 0.45 | 0.99
Note: C_1 = C(X, Y), C_2 = C(Y, X), where X (resp. Y) represents the solution set output by MOPSO (resp. CPLEX).
Table 4. Comparison of MOPSO with MOEA/D-ACO and pccsAMOPSO on the test instances with 50 cars.
No. | Size (n, E, L) | MOPSO: ONVG, D_av, D_max, TS | MOEA/D-ACO: ONVG, D_av, D_max, TS | pccsAMOPSO: ONVG, D_av, D_max, TS
1 ( 50 , 3 , 10 ) 9.860.0090.0221.4410.970.0600.1511.9610.480.0700.1721.50
2*12.730.0200.0471.329.490.0430.0741.699.470.0540.1422.15
3*10.460.0210.0451.3012.010.0630.1881.3711.110.0400.1031.30
4*10.910.0130.0291.4210.230.0320.0651.7710.970.0930.1991.79
5*12.680.0170.0371.499.510.0610.1791.639.300.0400.0831.69
6 ( 50 , 3 , 15 ) 10.200.0260.0591.4811.010.0320.0901.4010.560.0700.1352.48
7*10.790.0180.0421.4310.700.0220.0472.2611.270.0360.0692.26
8*11.990.0320.0731.3411.690.0180.0442.1411.870.0220.0471.87
9*10.600.0160.0361.2610.590.0530.0961.7610.120.0580.1461.92
10*12.750.0240.0441.4813.000.0270.0691.4113.050.0890.1631.81
11 ( 50 , 3 , 20 ) 12.200.0200.0311.1811.340.0160.0321.1811.700.0940.2711.94
12*12.420.0180.0401.4512.680.0430.0882.2912.900.0850.1461.32
13*12.260.0200.0411.2611.970.0130.0361.4912.200.0410.1132.10
14*10.790.0200.0311.3710.630.0480.1291.3511.060.0570.1291.71
15*13.880.0300.0511.3712.530.0530.1371.6513.660.0650.1191.87
16 ( 50 , 6 , 10 ) 11.790.0280.0611.3411.550.0600.1491.5112.070.0240.0671.30
17*12.940.0280.0651.4811.510.0200.0491.7613.200.0530.1201.52
18*11.130.0220.0491.4910.200.0590.1231.6711.520.0650.1481.47
19*12.360.0140.0221.4012.020.0650.1051.5111.410.0260.0461.40
20*12.460.0300.0611.3613.350.0430.1231.6212.200.0530.1181.25
21 ( 50 , 6 , 15 ) 15.320.0300.0591.2716.200.0140.0241.7615.210.0300.0741.79
22*12.840.0190.0471.2811.530.0590.1581.5312.350.0440.0811.83
23*13.550.0100.0221.4813.880.0170.0291.7113.700.0850.2501.97
24*13.880.0200.0331.1913.840.0580.1231.8812.770.0390.0941.75
25*12.370.0110.0271.4712.290.0350.0941.5711.370.0910.1771.52
26 ( 50 , 6 , 20 ) 13.220.0100.0181.2612.310.0430.1231.6513.140.0690.1302.07
27*14.580.0280.0681.3512.820.0650.1481.5514.210.0590.1631.28
28*12.170.0320.0601.3910.980.0250.0762.2212.130.0770.1351.85
29*10.600.0280.0641.2810.580.0630.1201.739.740.0600.1761.97
30*13.900.0140.0221.2115.060.0140.0311.3214.350.0630.1701.48
Avg. | all | 12.26, 0.021, 0.044, 1.36 | 11.88, 0.041, 0.097, 1.68 | 11.97, 0.058, 0.133, 1.74
Table 5. Comparison of MOPSO with MOEA/D-ACO and pccsAMOPSO on the test instances with 100 cars.
No. | Size (n, E, L) | MOPSO: ONVG, D_av, D_max, TS | MOEA/D-ACO: ONVG, D_av, D_max, TS | pccsAMOPSO: ONVG, D_av, D_max, TS
31 ( 100 , 6 , 10 ) 16.750.0370.0751.4315.480.0620.1691.6416.010.1310.2651.52
32*16.720.0480.1071.4217.620.0340.0571.8113.850.0230.0441.87
33*15.380.0380.0931.3515.620.0440.0851.6314.010.1230.3281.60
34*16.530.0440.0851.3614.980.0260.0461.3415.200.0950.1831.34
35*15.660.0260.0561.3614.570.0730.1691.5415.280.0330.0951.49
36 ( 100 , 6 , 15 ) 15.970.0470.0981.4915.590.0670.1092.1912.910.0760.2031.90
37*17.020.0270.0611.2618.500.0830.1711.2516.190.0440.0801.34
38*16.250.0350.0791.3014.650.0790.2101.2114.680.0940.2401.29
39*14.860.0370.0651.6914.540.0740.2152.5112.060.0570.1361.72
40*17.600.0360.0781.5516.610.0590.1661.4515.230.0610.1511.49
41 ( 100 , 6 , 20 ) 17.480.0490.1091.5215.810.0530.1231.6516.490.0230.0591.57
42*18.980.0450.0761.3717.820.0670.1472.0417.860.1150.2921.37
43*17.140.0410.0761.6616.010.0520.1021.5216.640.0620.1261.92
44*17.060.0270.0641.5317.880.0790.1961.5014.320.1290.2281.59
45*15.760.0330.0531.4414.290.0730.1511.6813.920.0710.1811.62
46 ( 100 , 10 , 10 ) 16.820.0360.0881.3516.770.0300.0851.5614.410.1440.3681.65
47*16.630.0290.0451.7017.420.0670.1231.8313.330.0850.1591.92
48*17.700.0230.0371.3017.550.0270.0801.4616.120.0890.2341.45
49*18.590.0350.0871.4219.180.0470.0901.9716.790.0300.0811.47
50*20.800.0390.0831.5621.430.0300.0562.2416.990.0610.1751.75
51 ( 100 , 10 , 15 ) 19.050.0350.0551.4019.000.0630.1701.3118.370.0760.1631.87
52*18.390.0410.0711.7216.960.0420.0861.9717.770.0990.2001.97
53*17.860.0410.0741.5917.490.0720.1411.7316.610.1400.3171.58
54*17.440.0260.0521.5315.790.0500.1301.5815.820.0240.0661.80
55*16.380.0390.0621.7014.610.0240.0682.0814.970.1100.2001.86
56 ( 100 , 10 , 20 ) 15.760.0200.0311.3617.050.0680.1171.3413.990.0340.0801.52
57*21.070.0210.0331.3922.250.0610.1311.9219.700.1180.2131.50
58*20.310.0500.1161.2721.470.0840.1901.3216.370.0550.1311.52
59*16.490.0460.0991.7215.380.0780.1731.7114.120.1150.3101.84
60*22.240.0430.0901.6423.250.0490.1322.3418.440.1240.2241.83
Avg. | all | 17.49, 0.036, 0.073, 1.48 | 17.19, 0.057, 0.130, 1.71 | 15.61, 0.081, 0.184, 1.64
Table 6. Comparison of MOPSO with MOEA/D-ACO and pccsAMOPSO on the test instances with 150 cars.
No. | Size (n, E, L) | MOPSO: ONVG, D_av, D_max, TS | MOEA/D-ACO: ONVG, D_av, D_max, TS | pccsAMOPSO: ONVG, D_av, D_max, TS
61 ( 150 , 9 , 10 ) 18.040.0610.1501.4116.510.0600.1561.7215.540.1180.2161.47
62*18.420.0560.0861.4218.790.0950.2591.4616.620.1180.2601.59
63*19.800.0480.0941.6717.930.1180.3052.2116.490.0650.1821.67
64*17.390.0230.0461.4918.010.0330.0871.5614.700.0290.0791.61
65*17.150.0660.1181.5816.630.0930.1861.8414.800.1250.2321.68
66 ( 150 , 9 , 15 ) 21.600.0360.0871.3722.490.0370.0721.9319.870.0440.1021.96
67*20.530.0230.0391.4221.350.1300.3771.6617.880.1730.4041.54
68*17.230.0640.1541.5116.100.1410.2692.2614.390.1070.2981.55
69*21.430.0480.0801.5919.340.1360.2252.5819.690.1070.2542.06
70*19.090.0530.1001.6319.850.0780.1722.3717.320.0620.1141.81
71 ( 150 , 9 , 20 ) 21.410.0200.0411.4421.220.1210.2921.5719.130.0780.1362.14
72*23.230.0550.1321.4221.640.0650.1951.9420.670.0690.1312.12
73*18.090.0280.0501.5017.240.0750.1822.1716.110.1700.4001.65
74*23.270.0440.0851.5923.700.0460.0952.0019.930.1510.3262.07
75*19.210.0280.0491.6518.970.1350.2482.4717.130.1400.2512.43
76 ( 150 , 12 , 10 ) 19.570.0550.1111.5718.540.0420.0871.7816.720.1380.2461.50
77*22.060.0280.0511.6421.790.0540.1062.5617.950.0510.1291.77
78*22.100.0450.0781.5520.800.0530.1471.9619.870.0350.0811.92
79*23.500.0190.0401.3522.750.1590.4651.6121.010.1120.1931.92
80*18.640.0190.0431.4919.260.0330.0981.8116.950.1510.3142.26
81 ( 150 , 12 , 15 ) 20.830.0610.0931.6019.480.0340.0941.7618.610.1050.2951.60
82*20.190.0400.0971.6620.780.0850.2112.6318.120.0800.1712.34
83*21.870.0490.1131.5022.240.0440.0861.8518.740.1020.2571.87
84*22.710.0440.0721.5321.920.1230.2602.2220.510.1050.2262.04
85*22.780.0320.0731.3822.670.0920.2272.2020.030.0650.1381.54
86 ( 150 , 12 , 20 ) 26.750.0400.0921.4026.980.1190.3031.3822.770.0860.2111.66
87*21.400.0650.1471.5021.740.0980.2192.2019.360.1600.4341.79
88*21.600.0250.0461.6720.850.1190.2652.0717.760.1030.1931.87
89*21.150.0600.1041.3820.210.0330.0861.6518.920.0510.1261.71
90*23.540.0410.0991.7022.740.1610.4242.2521.610.1110.2862.22
Avg. | all | 20.82, 0.043, 0.086, 1.52 | 20.42, 0.087, 0.207, 1.99 | 18.31, 0.100, 0.223, 1.85
Table 7. Comparison of MOPSO with MOEA/D-ACO and pccsAMOPSO on the test instances with 200 cars.
No. | Size (n, E, L) | MOPSO: ONVG, D_av, D_max, TS | MOEA/D-ACO: ONVG, D_av, D_max, TS | pccsAMOPSO: ONVG, D_av, D_max, TS
91 ( 200 , 10 , 10 ) 22.750.0440.1001.5223.920.0280.0671.5720.820.1170.2371.71
92*18.640.0910.2131.4418.880.0440.1271.6117.440.1240.2191.64
93*19.810.0580.1251.4420.160.1610.4361.7517.970.1880.5491.56
94*19.020.0850.1551.6818.800.1490.3801.8317.440.2270.3961.81
95*18.640.0250.0431.6719.600.0880.2282.2716.640.1610.2962.03
96 ( 200 , 10 , 15 ) 19.240.0240.0431.6020.350.0550.1162.5018.310.0320.0902.01
97*21.940.0780.1421.3923.150.0390.1021.8520.540.1030.2181.67
98*21.330.0740.1161.5320.280.0920.1592.0520.560.0700.1521.69
99*22.710.0940.1671.7021.360.0370.0891.9221.360.1210.2642.32
100*21.790.0790.1241.7822.280.1140.3252.5519.480.0230.0681.71
101 ( 200 , 10 , 20 ) 21.580.0540.0871.6721.540.0580.1452.1118.750.0560.1331.91
102*24.340.0510.1111.4926.850.1590.4362.3620.580.1330.3851.65
103*22.380.0900.1771.7822.150.2190.3652.0720.920.2110.4912.36
104*20.740.0650.1601.6822.110.1630.3401.8820.050.1570.3031.82
105*22.510.0380.0651.8721.060.1420.3222.1320.510.1310.2881.92
106 ( 200 , 15 , 10 ) 27.130.0870.1721.6526.580.1030.2052.5923.340.1410.2491.73
107*26.670.0660.1201.6628.610.0520.0912.6924.310.2070.4211.67
108*22.950.0370.0891.7024.090.1830.5161.9219.770.0410.1082.20
109*21.410.0250.0621.5019.990.2090.6061.7818.560.1620.3512.10
110*24.860.0950.2101.7223.400.1900.5202.4624.050.1210.2441.61
111 ( 200 , 15 , 15 ) 21.150.0940.2301.8421.810.1100.3102.6517.860.0970.2541.82
112*22.020.0620.1011.9122.340.0830.2442.3420.890.2220.5131.88
113*28.670.0780.1581.4130.250.0880.1451.8825.610.1140.3161.43
114*20.170.0820.1701.4021.500.1450.4211.5217.830.2030.5151.39
115*24.160.0550.0971.7725.850.0970.1722.1720.580.1640.4762.00
116 ( 200 , 15 , 20 ) 26.080.0620.1481.5627.060.0430.1152.3424.230.1280.3481.95
117*23.030.0850.1851.7223.670.0820.1642.3322.100.0560.1121.69
118*23.170.0750.1221.4023.460.2180.5451.7220.240.1740.4221.84
119*27.550.0320.0601.5129.950.1310.3621.9926.510.1120.3211.86
120*24.760.0890.1721.8425.140.1700.4602.8421.970.0950.1661.97
Avg. | all | 22.71, 0.066, 0.131, 1.63 | 23.21, 0.115, 0.284, 2.12 | 20.64, 0.130, 0.297, 1.83
Table 8. Coverage metrics for comparing the algorithms on the test instances with 50 cars.
No. | Size (n, E, L) | C(X,Y) | C(Y,X) | C(X,Z) | C(Z,X) | C(Y,Z) | C(Z,Y)
1 ( 50 , 3 , 10 ) 1.000.000.880.100.930.01
2*0.910.090.950.040.520.43
3*0.860.120.980.000.410.59
4*0.930.050.880.100.440.55
5*0.870.100.870.110.370.57
6 ( 50 , 3 , 15 ) 0.870.111.000.000.680.26
7*0.990.000.940.040.860.11
8*0.930.040.870.110.620.36
9*0.850.141.000.000.620.33
10*0.870.090.960.010.420.54
11 ( 50 , 3 , 20 ) 0.880.060.880.100.390.53
12*0.850.130.880.090.710.21
13*0.930.031.000.000.390.58
14*0.970.020.890.070.410.57
15*0.980.010.950.020.400.59
16 ( 50 , 6 , 10 ) 0.970.000.930.070.590.36
17*1.000.001.000.000.890.05
18*0.910.090.910.060.560.42
19*0.870.090.850.150.430.50
20*1.000.000.890.061.000.00
21 ( 50 , 6 , 15 ) 0.970.010.890.080.600.38
22*0.860.121.000.000.490.46
23*0.900.060.840.150.550.38
24*0.910.060.990.000.610.33
25*0.910.021.000.000.410.51
26 ( 50 , 6 , 20 ) 0.860.110.910.040.970.00
27*0.820.180.980.000.690.24
28*0.940.040.980.000.600.32
29*0.810.150.850.130.540.38
30*0.880.110.950.000.420.56
Avg. | all | 0.91 | 0.07 | 0.93 | 0.05 | 0.58 | 0.37
Table 9. Coverage metrics for comparing the algorithms on the test instances with 100 cars.
No. | Size (n, E, L) | C(X,Y) | C(Y,X) | C(X,Z) | C(Z,X) | C(Y,Z) | C(Z,Y)
31 ( 100 , 6 , 10 ) 0.870.120.920.030.700.23
32*0.810.141.000.000.650.34
33*0.820.151.000.000.530.46
34*0.850.100.940.040.760.21
35*0.990.001.000.000.780.16
36 ( 100 , 6 , 15 ) 0.980.000.890.100.780.21
37*0.900.081.000.000.930.00
38*0.990.000.840.110.440.49
39*0.960.000.960.010.780.15
40*0.980.000.920.080.440.48
41 ( 100 , 6 , 20 ) 0.900.091.000.000.390.58
42*0.820.130.860.090.780.16
43*0.820.130.870.090.380.54
44*0.920.060.920.030.570.43
45*0.980.010.870.090.650.31
46 ( 100 , 10 , 10 ) 1.000.000.860.110.810.15
47*0.990.000.910.040.410.54
48*0.840.150.900.050.420.52
49*0.980.010.950.010.600.32
50*0.880.070.900.070.730.21
51 ( 100 , 10 , 15 ) 0.930.040.960.000.900.05
52*0.880.101.000.000.920.05
53*0.950.001.000.000.520.42
54*1.000.000.880.120.970.03
55*0.910.020.950.040.940.02
56 ( 100 , 10 , 20 ) 1.000.000.930.040.470.46
57*0.830.130.980.000.690.26
58*0.880.110.960.020.430.50
59*1.000.000.990.010.610.35
60*0.980.011.000.000.600.39
Avg. | all | 0.92 | 0.05 | 0.94 | 0.04 | 0.65 | 0.30
Table 10. Coverage metrics for comparing the algorithms on the test instances with 150 cars.
No. | Size (n, E, L) | C(X,Y) | C(Y,X) | C(X,Z) | C(Z,X) | C(Y,Z) | C(Z,Y)
61 ( 150 , 9 , 10 ) 0.830.121.000.000.550.37
62*0.970.001.000.000.910.08
63*0.920.041.000.000.940.02
64*0.970.020.950.010.370.61
65*1.000.000.880.080.810.13
66 ( 150 , 9 , 15 ) 0.830.140.930.040.740.19
67*0.880.111.000.000.370.61
68*0.950.041.000.000.810.18
69*0.860.120.880.070.570.42
70*0.940.011.000.000.550.38
71 ( 150 , 9 , 20 ) 0.840.160.920.050.650.29
72*0.900.080.930.050.390.61
73*0.840.110.900.100.470.49
74*1.000.000.920.030.960.00
75*0.960.010.860.120.740.20
76 ( 150 , 12 , 10 ) 0.990.000.950.000.410.56
77*1.000.000.980.000.610.31
78*0.960.001.000.000.570.43
79*0.820.160.930.060.790.14
80*0.910.070.960.021.000.00
81 ( 150 , 12 , 15 ) 0.810.180.850.120.930.00
82*0.870.121.000.000.390.52
83*0.830.140.880.090.620.37
84*0.910.080.930.020.730.22
85*0.820.160.960.010.730.22
86 ( 150 , 12 , 20 ) 0.890.100.890.050.780.15
87*0.910.050.900.060.400.52
88*0.860.100.920.030.440.53
89*0.870.081.000.000.790.17
90*0.860.080.960.010.830.15
Avg. | all | 0.90 | 0.08 | 0.94 | 0.03 | 0.66 | 0.30
Table 11. Coverage metrics for comparing the algorithms on the test instances with 200 cars.
No. | Size (n, E, L) | C(X,Y) | C(Y,X) | C(X,Z) | C(Z,X) | C(Y,Z) | C(Z,Y)
91 ( 200 , 10 , 10 ) 0.850.111.000.000.680.29
92*0.960.010.930.030.850.14
93*0.810.171.000.000.650.35
94*0.840.151.000.000.890.11
95*0.960.000.930.030.730.21
96 ( 200 , 10 , 15 ) 0.880.111.000.000.720.23
97*0.960.000.870.120.890.04
98*0.820.150.940.020.410.51
99*0.810.151.000.000.670.29
100*0.850.110.890.060.840.13
101 ( 200 , 10 , 20 ) 0.860.130.920.070.960.00
102*0.980.001.000.000.550.42
103*1.000.001.000.000.990.00
104*0.930.050.920.040.860.06
105*0.980.010.860.090.910.02
106 ( 200 , 15 , 10 ) 0.940.031.000.000.390.56
107*0.960.001.000.000.980.00
108*0.970.000.870.120.880.07
109*0.950.021.000.000.390.59
110*0.860.120.950.020.800.12
111 ( 200 , 15 , 15 ) 0.910.061.000.000.490.49
112*0.820.150.850.130.400.54
113*0.920.020.870.130.980.00
114*0.970.000.990.000.640.28
115*0.840.121.000.000.920.06
116 ( 200 , 15 , 20 ) 0.840.110.920.040.420.55
117*0.860.120.970.000.900.07
118*0.810.150.890.110.450.48
119*0.870.131.000.000.530.42
120*0.810.130.900.060.560.38
Avg. | all | 0.89 | 0.08 | 0.95 | 0.04 | 0.71 | 0.25
Table 12. The p-values resulting from the t-tests for MOPSO and MOEA/D-ACO.
n | ONVG | D_av | D_max | TS
50 | 4.14 × 10^{-2} | 4.06 × 10^{-6} | 1.16 × 10^{-6} | 2.90 × 10^{-7}
100 | 6.66 × 10^{-2} | 4.47 × 10^{-6} | 1.87 × 10^{-6} | 4.77 × 10^{-5}
150 | 8.49 × 10^{-3} | 5.85 × 10^{-6} | 1.74 × 10^{-6} | 4.30 × 10^{-10}
200 | 9.87 × 10^{-3} | 8.14 × 10^{-5} | 1.28 × 10^{-5} | 1.40 × 10^{-10}
Table 13. The p-values resulting from the t-tests for MOPSO and pccsAMOPSO.
n | ONVG | D_av | D_max | TS
50 | 5.74 × 10^{-2} | 1.64 × 10^{-9} | 3.20 × 10^{-9} | 5.99 × 10^{-7}
100 | 3.93 × 10^{-11} | 2.89 × 10^{-7} | 5.79 × 10^{-8} | 2.88 × 10^{-7}
150 | 4.19 × 10^{-19} | 1.91 × 10^{-8} | 8.94 × 10^{-9} | 5.48 × 10^{-8}
200 | 6.59 × 10^{-13} | 8.72 × 10^{-7} | 3.58 × 10^{-7} | 4.49 × 10^{-6}
