1. Introduction
The manufacturing industry is widely regarded as an intensive energy consumer. To achieve sustainable development goals, a number of regulations are urging manufacturers to adopt energy-saving measures. In addition to upgrading production equipment, which is usually costly, the possibility of using soft techniques to achieve energy savings should be emphasized. In fact, production scheduling can play a significant role in reducing the energy consumption of manufacturing processes. However, traditional production scheduling software, which is often embedded in a company's ERP or MES, has been designed solely for productivity or profitability goals, without consideration of sustainability impacts. To take energy consumption and other environmental factors into consideration, the scheduling models and algorithms need to be significantly revised or even rewritten. A major difficulty in this redevelopment process is that the energy model, which characterizes the power consumption of manufacturing activities, is highly industry- and factory-dependent; therefore, energy-efficient production scheduling algorithms are not universally applicable and have to be designed specifically for each company. Currently, some energy-intensive manufacturers (such as steel companies) have adopted energy-saving production scheduling techniques, and these practices have yet to be extended to other industries. Another issue that prevents the application of such scheduling technologies is their high computational complexity. The simultaneous consideration of energy-related aspects complicates the scheduling model (leading to more constraints and variables), which in turn requires more powerful optimization algorithms. This poses a theoretical challenge to the operations research and management science community.
Energy-efficient production scheduling problems have been extensively studied by a number of researchers in recent years [1,2,3,4]. In the existing research, considerations for energy efficiency can be divided into two categories, i.e., energy-related objectives and energy-related constraints. Wang et al. [1] study a bi-objective single-machine batch scheduling problem with non-identical job sizes. Zhang and Chiong [5] propose an enhanced multi-objective genetic algorithm to solve the job shop scheduling problem minimizing total energy consumption and total weighted tardiness. Wu et al. [6] investigate the flexible job-shop scheduling problem with the objectives of minimizing total energy consumption and makespan; they consider the deterioration effect as well. Liao et al. [7] use MOPSO to address the bi-objective single-machine scheduling problem with energy consumption constraints. Módos et al. [8] consider a production scheduling problem with large electricity consumption, where an energy-related constraint requires that the total energy consumption in specified time intervals should not exceed a given limit.
Just-in-time (JIT) objectives are commonly considered in production scheduling problems. Under the JIT logic, both earliness and tardiness (with respect to due dates) should be penalized [9,10]. The former causes inventory holding costs such as storage and extra delivery costs (especially for perishable goods), while the latter can cause loss of reputation and sales, and may even incur contractual penalties.
In real-world manufacturing systems, the processing times of certain jobs can be longer than their nominal values because of the deterioration effect, i.e., the phenomenon that a job requires a longer processing time if its starting time is postponed or later than expected. A variety of factors may cause deterioration, such as a high workload of machines, the insertion of maintenance activities, and the fatigue of human operators [11,12]. Deterioration can also be explained by industry-specific reasons; for example, a steel slab in the hot rolling production process has to be reheated (which takes extra processing time) if it has been kept waiting so long that its temperature declines significantly [13].
To the best of our knowledge, there are no results in the existing literature for production scheduling problems that integrate the JIT objective, the energy-saving requirement, and an explicit consideration of the deterioration effect. Production systems with the above characteristics are commonly observed in mechanical and metal manufacturing companies. Note that the single-machine JIT scheduling problem with a deterioration effect is NP-hard [14], which means the time required by an exact algorithm to solve the problem increases exponentially with the number of jobs. Hence, a multi-objective particle swarm optimization algorithm enhanced by local search (MOPSO-LS) is presented in this paper to obtain near-optimal solutions for the discussed problem.
The rest of this paper is organized as follows. Section 2 introduces the basic definitions and notations for the bi-objective single-machine scheduling problem with deterioration. Section 3 describes the proposed MOPSO-LS algorithm. Computational results are shown in Section 4. Conclusions and future work are presented in Section 5.
3. The Proposed Algorithm
3.1. The Basic PSO Algorithm
The proposed algorithm adopts MOPSO as the main framework and incorporates an enhanced local search method to promote efficiency. We first briefly introduce the basic principles of PSO. The PSO algorithm is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. It is a well-known meta-heuristic algorithm for solving both continuous and combinatorial optimization problems.
First, the algorithm generates a population of particles, where the position of each particle represents a potential solution in the search space of the considered problem. Each particle has a fitness value obtained by evaluating the objective function to be optimized, and the fitness determines the flying direction and speed of the particles. In each iteration, two types of best positions are preserved. One is the personal best position (pbest), which represents the best solution each particle has achieved so far. The other is the global best position (gbest), which represents the best position obtained so far by all the particles. Finally, the particles learn from these two best positions based on their current flying routes, and the positions of the particles are updated in each iteration. The iterations continue until the convergence criterion is satisfied.
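The canonical update described above can be sketched as follows (the inertia and learning coefficients `w`, `c1`, `c2` are illustrative defaults, not the values tuned in this paper):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of the canonical PSO velocity/position update.
    Each particle is attracted toward its personal best (pbest) and
    the swarm's global best (gbest)."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```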
3.2. Encoding and Decoding
We use the largest position value (LPV) rule as the decoding method, which transforms continuously encoded particles into discrete solutions. Simply speaking, the decoded solution records the relative order of the corresponding position values (from largest to smallest). For example, for an eight-dimensional particle whose fourth position value is the largest (rank 1) and whose third position value is the smallest (rank 8), job 4 is placed first in the decoded job sequence and job 3 is placed last.
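The LPV decoding can be sketched as follows (the eight position values are illustrative, chosen so that the fourth dimension holds the largest value and the third the smallest):

```python
def lpv_decode(position):
    """Largest position value (LPV) rule: the job whose position value
    is largest is scheduled first. Returns a 1-based job sequence."""
    order = sorted(range(len(position)), key=lambda j: position[j], reverse=True)
    return [j + 1 for j in order]

# Dimension 4 holds the largest value, so job 4 comes first;
# dimension 3 holds the smallest, so job 3 comes last.
print(lpv_decode([0.6, 1.8, -0.9, 2.5, 0.1, 1.2, 0.3, -0.2]))
# -> [4, 2, 6, 1, 7, 5, 8, 3]
```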
3.3. Solution Initialization
Generally, the initial population is generated by specific heuristic rules combined with some random methods to get high-quality and distinct solutions. However, to encourage solution diversity and to facilitate the evaluation of the k-opt neighborhood operations, the individuals of the initial population are completely randomized in our proposed algorithm.
3.4. Sorting of Solutions
In multi-objective optimization settings, Pareto dominance is the main criterion for distinguishing the quality of different solutions. In our research, both objectives are to be minimized. In this case, a solution x with objective values (f1(x), f2(x)) is said to dominate a solution y with objective values (f1(y), f2(y)) if either one of the following two conditions is satisfied:
(1) f1(x) < f1(y) and f2(x) <= f2(y);
(2) f1(x) <= f1(y) and f2(x) < f2(y).
The relation will be denoted by x < y. Two solutions are treated equally, in the same Pareto rank, if they are mutually non-dominated. All the non-dominated solutions are stored in an external archive.
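The dominance test for two minimization objectives reduces to a simple comparison of objective vectors:

```python
def dominates(f, g):
    """True if the solution with objective vector f Pareto-dominates the
    one with vector g (all objectives minimized): f is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

print(dominates((1, 2), (2, 2)))  # -> True
print(dominates((1, 2), (2, 1)))  # -> False (mutually non-dominated)
print(dominates((1, 2), (1, 2)))  # -> False (equal, no strict improvement)
```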
In addition to Pareto dominance, the adaptive grid method proposed by Coello Coello [15] is used to sort solutions that are not mutually dominated. The aim is to avoid over-crowded regions and to obtain evenly distributed non-dominated solutions for the decision maker. Figure 2 shows the insertion of a new solution into the adaptive grid.
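A minimal sketch of locating a solution's cell in such a grid (the number of divisions per objective axis is a free parameter here; the grid bounds follow the current extreme objective values in the archive):

```python
def grid_index(f, f_min, f_max, divisions):
    """Map an objective vector f to its cell in an adaptive grid whose
    axes span [f_min, f_max] per objective, split into `divisions`
    equal intervals. Crowded cells can then be penalized when sorting."""
    idx = []
    for v, lo, hi in zip(f, f_min, f_max):
        width = (hi - lo) / divisions or 1.0  # guard degenerate axes
        idx.append(min(int((v - lo) / width), divisions - 1))
    return tuple(idx)

print(grid_index((5.0, 5.0), (0.0, 0.0), (10.0, 10.0), 10))  # -> (5, 5)
```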
3.5. External Repository and Updating Mechanisms
In standard PSO intended for single-objective optimization, both the personal best solution associated with each particle and the global best solution are individual solutions. However, it is necessary to preserve a number of equally optimal (non-dominated) solutions when addressing multi-objective optimization problems. For this purpose, we build an external repository to save all the historical non-dominated solutions found during the search process.
The maintenance rules are described as follows. In each iteration, if the newly produced solutions are all dominated by some existing solutions in the repository, nothing happens. If there are any new non-dominated solutions, all of them are inserted into the repository, and any solutions originally in the repository that they dominate are eliminated.
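These maintenance rules can be sketched as follows (solutions are represented here directly by their objective vectors for brevity):

```python
def dominates(f, g):
    """Pareto dominance for minimization objectives."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def update_repository(repository, new_solutions):
    """Archive maintenance: discard dominated newcomers, insert the
    non-dominated ones, and evict archived solutions they dominate."""
    for s in new_solutions:
        if any(dominates(r, s) for r in repository):
            continue  # s is dominated by an archived solution: nothing happens
        repository = [r for r in repository if not dominates(s, r)]
        repository.append(s)
    return repository

print(update_repository([(3, 3)], [(1, 4), (2, 2), (5, 5)]))
# -> [(1, 4), (2, 2)]   ((3, 3) is evicted, (5, 5) is rejected)
```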
3.6. The Mutation Operator
We apply a mutation operator to enhance the global searching capability of MOPSO (the concept of mutation is adapted from genetic algorithms, representing a perturbation technique that alters the current solutions slightly in the hope of finding better ones). The mutation operator is applied to the particles of the swarm after the position updating process. Specifically, we use a mutation index p_mu to control the probability of mutation. When p_mu is a positive real number, the mutation operator will be applied to each particle with a probability of p_m. In the selected particle, the mutation operator changes the value of a randomly identified position to a real number generated from the uniform distribution whose interval is centered at its original value with r as its radius. The p_m and r mentioned above can be calculated as follows:

p_m = (1 - t/T)^(b/p_mu),    r = (x_max - x_min) / 2 * (1 - t/T)^(b/p_mu),

where t and T represent the current iteration number and the maximum iteration number of MOPSO, p_mu is the mutation rate defined in MOPSO, x_min and x_max are the bounds of the position values, and the exponent b is set at 4 in our algorithm.
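A sketch of this mutation step, assuming a non-uniform decay in which both the mutation probability and the perturbation radius shrink as the iteration count grows (the position bounds and parameter names here are illustrative assumptions, not values fixed by the paper):

```python
import random

def mutate(position, t, T, p_mu, b=4, x_min=-4.0, x_max=4.0):
    """Non-uniform mutation sketch: with a probability that decays over
    the iterations, resample one randomly chosen dimension uniformly
    around its current value, within a likewise-shrinking radius.
    x_min/x_max are assumed position bounds."""
    decay = (1 - t / T) ** (b / p_mu)
    p_m = decay
    radius = 0.5 * (x_max - x_min) * decay
    if random.random() < p_m:
        d = random.randrange(len(position))
        position[d] += random.uniform(-radius, radius)
        position[d] = min(max(position[d], x_min), x_max)  # clamp to bounds
    return position
```

At t = T the decay factor is zero, so particles in the final iteration are never perturbed, which preserves convergence late in the search.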
3.7. Local Search
The two-opt neighborhood operator, which reverses one segment of the sequence, is the most commonly used operator for generating neighboring solutions for routing problems. The idea can be extended to multiple segments (k-opt). However, the operator needs to be modified in our research, since a scheduling problem is different from a routing problem.
We define the k-opt neighborhood operations as follows. For a given sequence, we select k - 1 segments randomly, and then we reverse some of these segments. Since each of the segments may or may not be reversed, there are a number of possible combinations. If the number of selected segments is m, there are 2^m - 1 ways to rearrange a schedule (excluding the unchanged original). For example, when considering two segments (3-opt), there are three possible outcome solutions (Figure 3). Similarly, for the case of three segments (4-opt), there are seven possible outcome solutions.
It is supposed that the selected segments do not overlap, because in the case of overlap, the order of reversing the segments affects the final solution. For example, consider two overlapping segments (shown in Figure 4): in an eight-job sequence, the first segment spans positions two to six and the second spans positions five to seven. If we reverse the first segment first, the jobs occupying positions five to seven change, so the subsequent reversal of the second segment acts on a different set of jobs than in the original sequence. Conversely, if we reverse the second segment first, the content of the first segment changes before it is reversed. The resulting solutions will certainly differ in these two cases. To avoid this ambiguity, we assume the selected segments for k-opt operations are overlap-free.
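With overlap-free segments, the neighborhood can be enumerated by reversing every non-empty subset of the selected segments, giving the 2^m - 1 outcomes described above:

```python
from itertools import combinations

def k_opt_neighbors(seq, segments):
    """Enumerate all 2^m - 1 neighbors obtained by reversing every
    non-empty subset of m non-overlapping segments. Each segment is a
    (start, end) index pair with `end` exclusive."""
    neighbors = []
    m = len(segments)
    for r in range(1, m + 1):
        for subset in combinations(segments, r):
            s = list(seq)
            for start, end in subset:
                s[start:end] = reversed(s[start:end])
            neighbors.append(s)
    return neighbors

# Two segments (3-opt) yield 2^2 - 1 = 3 neighbors.
print(k_opt_neighbors([1, 2, 3, 4, 5], [(0, 2), (2, 5)]))
# -> [[2, 1, 3, 4, 5], [1, 2, 5, 4, 3], [2, 1, 5, 4, 3]]
```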
After the solution set in the external repository has been updated in each iteration, the k-opt operator is performed on one of the solutions in the repository, with all solutions having equal probabilities of being selected. The new solutions produced by this operation are added to the repository; then all the solutions are evaluated and saved according to the updating mechanisms introduced above.
3.8. MOPSO-LS Framework
The entire algorithm structure of MOPSO-LS is shown in Figure 5. Compared with the standard PSO workflow, our algorithm features a novel mutation operator and a local search module, which greatly enhance the exploitation ability for coping with a large-scale search space. In addition, the mechanisms for handling multi-objective optimization are also important contributors to the overall algorithm, although they are not directly reflected in the flowchart.
5. Conclusions
This paper investigates the single-machine scheduling problem with deterioration to minimize two objectives simultaneously, i.e., the total weighted earliness/tardiness and the total energy consumption. The first objective reflects the just-in-time management philosophy, and the second reflects the motivation to pursue sustainable and green manufacturing.
We propose a hybrid meta-heuristic algorithm called MOPSO-LS, which enhances MOPSO with a dedicated local search procedure to solve the problem. The local search strategy is built on the idea of k-opt neighborhood operators and has been adapted to the scheduling problem. To evaluate the effectiveness and efficiency of the proposed algorithm, computational experiments are conducted and comparisons are made between MOPSO-LS, MOPSO, and NSGA-II. The results show that MOPSO-LS has a faster convergence speed and better solution quality than the other two algorithms. Moreover, the local search procedures based on 2-opt, 3-opt, and 4-opt are shown to improve the search ability of MOPSO significantly.
In future research, we will focus on more advanced energy models and more sophisticated local search strategies. The energy consumption of machines in different states and the different power rates under adjustable processing speeds will be taken into consideration. This will result in a more complicated scheduling problem with additional decision variables, which makes it necessary to devise an enhanced local search scheme to promote the search efficiency of meta-heuristic algorithms. The k-opt neighborhood operators could be combined with more complex moves to construct an adaptive large neighborhood search policy.