*Article* **Modeling and Optimization for Multi-Objective Nonidentical Parallel Machining Line Scheduling with a Jumping Process Operation Constraint**

**Guangyan Xu <sup>1</sup>, Zailin Guan <sup>1</sup>, Lei Yue <sup>2,</sup>\*, Jabir Mumtaz <sup>3</sup> and Jun Liang <sup>1</sup>**



**Citation:** Xu, G.; Guan, Z.; Yue, L.; Mumtaz, J.; Liang, J. Modeling and Optimization for Multi-Objective Nonidentical Parallel Machining Line Scheduling with a Jumping Process Operation Constraint. *Symmetry* **2021**, *13*, 1521. https://doi.org/10.3390/sym13081521

Academic Editors: Juan Alberto Rodríguez Velázquez and Alejandro Estrada-Moreno

Received: 13 July 2021 Accepted: 13 August 2021 Published: 18 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract:** This paper investigates the nonidentical parallel production line scheduling problem derived from an axle housing machining workshop of an axle manufacturer. The characteristics of axle housing machining lines are analyzed, and a nonidentical parallel line scheduling model with a jumping process operation (NPPLS-JP), which considers mixed model production, machine eligibility constraints, and fuzzy due dates, is established so as to minimize the makespan and earliness/tardiness penalty cost. While the physical structures of the parallel lines in the NPPLS-JP model are symmetric, the production capacities and process capabilities are asymmetric for different models. Different from the general parallel line scheduling problem, NPPLS-JP allows for a job to transfer to another production line to complete the subsequent operations (i.e., jumping process operations), and the transfer is unidirectional. The significance of the NPPLS-JP model is that it meets the demands of multivariety mixed model production and makes full use of the capacities of parallel production lines. Aiming to solve the NPPLS-JP problem, we propose a hybrid algorithm named the multi-objective grey wolf optimizer based on decomposition (MOGWO/D). This new algorithm combines the GWO with the multi-objective evolutionary algorithm based on decomposition (MOEA/D) to balance the exploration and exploitation abilities of the original MOEA/D. Furthermore, coding and decoding rules are developed according to the features of the NPPLS-JP problem. To evaluate the effectiveness of the proposed MOGWO/D algorithm, a set of instances with different job scales, job types, and production scenarios is designed, and the results are compared with those of three other famous multi-objective optimization algorithms. The experimental results show that the proposed MOGWO/D algorithm exhibits superiority in most instances.

**Keywords:** nonidentical parallel production lines; axle housing machining; mixed model production; eligibility constraint; fuzzy due date; grey wolf optimizer

## **1. Introduction**

Flow shop scheduling problems are among the most common and widely studied problems in the manufacturing industry, and the examined problems are usually simplified versions of the real flow shop scheduling problem so as to reduce the difficulty of modeling and solving them. Such oversimplified schemes often cannot adequately solve scheduling problems in actual environments. In real-world manufacturing situations, special constraints or uncertainties must usually be considered and handled, such as sequence-dependent setup times [1–3], various kinds of parallel machines [4,5], machine eligibility constraints [6–9], resource constraints [10,11], and fuzzy stochastic demand [12,13]. Considering these additional constraints and uncertainties makes the developed scheduling models closer to real production scenarios, but also increases their scheduling complexity. Because of product iteration requirements and the diversified needs of customers, production systems must often address multivariety production on multiple production lines. This is very common in manufacturing enterprises such as those in the automobile and household appliance industries, as well as for construction machinery manufacturers. In these situations, the equipment configurations of the multiple lines may differ in order to meet multivariety production needs. In this paper, we study this nonidentical parallel production line scheduling problem.

To improve machine utilization and shorten the waiting times of the jobs to be processed, a jumping process operation is often used: once a certain operation of a job is finished, the job can move to another production line to complete its subsequent operations. This jumping process operation is unidirectional. To solve this kind of scheduling problem, nonidentical parallel line scheduling with a jumping process operation (NPPLS-JP) is proposed; in essence, this is also a parallel production line scheduling problem. Notably, the proposed NPPLS-JP problem is NP-hard because of its complexity.

This NPPLS-JP problem is derived from the axle housing machining workshop of an axle manufacturer. Axle housing is an important part of axle production; it usually adopts make-to-stock (MTS) production and make-to-order (MTO) assembly. The machining of an axle housing is shown in Figure 1. The axle manufacturer adopts nonidentical parallel production lines for axle housing machining. There are two parallel production lines (A and B) in the axle housing machining workshop, and each production line is installed linearly, as shown in Figure 1. The physical structures of the two production lines are symmetrical. Each production line contains five stages corresponding to five operations, and the parallel machines at any stage of each line are identical. The corresponding stages of production lines A and B have similar functions, but the configurations of the machines in the different lines are different in order to meet the needs of multivariety mixed production. In this production situation, the production load of each stage on any line is not easy to balance via a simple scheduling scheme.

**Figure 1.** The composition of an axle housing machining line.

According to the different vehicle models, there are eight types of axle housing products that can be processed on lines A and B with multivariety mixed model production; all types of products are processed through five operations, as shown in Figure 1. Two types of axle housing products have machine eligibility constraints: their operation "combined machining I" can only be performed on the corresponding machines in production line B, while the other types can finish processing on either line independently. For the different types of axle housing, the different processing times required for the same operation on the same machine and the different mixing ratios of the various axle housing types increase the complexity of the scheduling problem. It is difficult to balance all the stages of each production line, so a jumping process operation is adopted to address this problem by allowing a job to be processed on two production lines. For example, when the operation "combined machining I" is finished on line A, the job is transferred to line B to complete the subsequent machining operations; this is called the jumping process operation, and the stage "combined machining I" is called the jumping process point. A jumping process operation is unidirectional, which means that axle housings processed on line A are allowed to be transferred to production line B, but not the other way around. Appropriate jumping process operations can reduce waiting times, improve the utilization rate of equipment, balance the production capacity of each stage, and help meet the due date of each order.

From the above description of a production system, it can be seen that the axle housing machining line scheduling problem has the following characteristics: (1) The configurations of the multiple production lines are similar but not the same. (2) Mixed multivariety production is adopted to organize production. (3) Several jobs with special types have machine eligibility constraints. (4) The jumping process operation is unidirectional in the production process. This problem can be regarded as a variant and extension of the flow shop scheduling problem or general parallel production line scheduling problems, and it involves four key decision-making processes, namely: job sequencing decisions, parallel line decisions, parallel machine decisions, and job jumping process operation decisions. It is obvious that the NPPLS-JP problem proposed in this paper is a rather complex scheduling optimization problem.

Because of the fierce competition in the market and the diversified needs of customers, the NPPLS-JP problem is widespread in manufacturing environments and has an important impact on the manufacturing efficiency of production systems. However, there is no relevant research on this topic in the existing literature, so the study in this paper is of exploratory and practical significance. In this paper, we establish a scheduling model for the NPPLS-JP problem, which involves multivariety mixed model production, multiline scheduling, and machine eligibility constraints. The objectives are to minimize the makespan and the earliness/tardiness penalty in a production cycle. In the NPPLS-JP model, to more closely approximate the actual production environment, the due date of a production order is represented by the fuzzy earliness/tardiness penalty model [14], and a hybrid algorithm combining the grey wolf optimizer (GWO) and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) is proposed to solve the NPPLS-JP problem.

The remainder of this paper is organized as follows. Section 2 reviews the literature relevant to multivariety mixed model production, parallel line scheduling, and multi-objective optimization. Section 3 gives a general statement of the NPPLS-JP problem and establishes a production-based, order-oriented, multi-objective scheduling model. Section 4 proposes the multi-objective grey wolf optimizer based on decomposition (MOGWO/D) for NPPLS-JP and describes its procedures in detail. Section 5 tests the performance of the proposed MOGWO/D algorithm by comparing it with three other well-known multi-objective algorithms on a set of designed test instances, and the experimental results are analyzed. Section 6 summarizes the research content and discusses directions for future research.

## **2. Literature Review**

Mixed model production refers to the production of a variety of products on a single production line to increase the flexibility of the line and meet multivariety and small-batch production demands. For multiproduct demands, mixed model production is widely adopted by manufacturing enterprises, especially in assembly workshops [15,16]. Because of the widespread application of mixed model production, many scholars have devoted themselves to research in this field and have made many achievements [17–21].

McMullen et al. [22] studied the mixed model scheduling problem with consideration of setup times. They presented a beam search heuristic method to obtain an efficient front. Leu and Hwang [23] proposed a resource-constrained mixed-production flow shop scheduling system for mixed precast production task problems and developed a search method based on a genetic algorithm (GA) to minimize the output makespan under resource constraints and mixed production. Wang et al. [24] studied final assembly line scheduling, which considers order scheduling and mixed model sequencing simultaneously, and combined the original artificial bee colony (ABC) algorithm with some steps of the GA and Pareto optimality to solve this problem. Bahman et al. [25] constructed a mixed-integer linear programming model with a tighter linear relaxation for a realistic automotive industry assembly line, including a set of specific requirements involving moving workers and limited workspace. Alghazi and Kurz [26] proposed a mixed model line-balancing integer program for mixed model assembly lines with the aim of minimizing the number of chemical workers; a constraint programming model was established to address larger assembly line balancing problems.

Parallel line scheduling problems are very common in both mass production and multiproduct production. Parallel line production can enhance the stability and flexibility of the production system and improve production efficiency. When one production line breaks down, not all production activities are stopped. All parallel lines may have the same number of processing stages, the pieces of equipment in the same stage are similar and can complete the same production processes, and every line can substitute for all other lines to produce all or some types of the desired products [27]. However, in some situations, the configurations of the machines in different lines are nonidentical; this situation is more convenient for the production of multiple products. For example, two types of equipment in different production lines can independently complete the operation of milling surfaces, but the machining accuracies are different.

Haq et al. [28] studied the line scheduling problem with multiple parallel processing in job shops, where each job can only be processed on a particular line and is not allowed to move between parallel lines. Meyr and Mann [29] introduced a new solution approach to the lot-sizing and scheduling problem for parallel production lines with consideration of scarce capacity; sequence-dependent setup times; and the deterministic, dynamic demands of multiple products. Mumtaz et al. [30] investigated the multiple assembly line scheduling problem for printed circuit board (PCB) assembly and developed a hybrid spider monkey optimization approach with an improved replacement strategy to solve it. Rajeswari et al. [27] presented parallel flow line scheduling with a dual objective to minimize the tardiness and earliness of jobs. All parallel flow lines had similar sets, and the authors developed a hybrid algorithm that used a GA and particle swarm optimization (PSO) incorporating a greedy randomized adaptive search to address the problem. Ebrahimipour et al. [31] proposed linear programming and a bagged binary knapsack to address the multiple production line scheduling problem. Mumtaz et al. [32] developed a mixed-integer programming model for the multilevel planning and scheduling problem of parallel PCB assembly lines, and a hybrid spider monkey optimization (HSMO) algorithm was proposed.

Multi-objective optimization refers to a situation where more than one conflicting objective must be optimized simultaneously. It is often impossible to obtain a single optimal solution as in a single-objective optimization problem; instead, a set of tradeoff solutions, namely, nondominated solutions, is obtained, from which the most suitable solution can be chosen according to the actual requirements. This makes multi-objective optimization more applicable to practical situations, which require the consideration of multiple indicators affecting decision making; as it is conducive to obtaining an ideal decision-making effect, multi-objective optimization has a wide application field and great practical value. MOEAs are population-based global optimization algorithms that simulate the evolution process of natural organisms. Since the whole solution set can be obtained in one run, MOEAs have become mainstream algorithms in multi-objective optimization. According to their selection mechanisms, MOEAs can be classified into three classes [33]: Pareto-based algorithms, indicator-based algorithms, and decomposition-based algorithms. The Pareto-based method was first proposed by Goldberg et al. [34] and has been widely studied since then; many classical multi-objective optimization algorithms are based on the Pareto dominance relationship, such as the famous SPEA [35], SPEA2 [36], and NSGA-II [37]. The indicator-based method uses performance evaluation indicators to guide the search process and the choice of solutions [38,39]. The MOEA/D was first proposed by Zhang and Li in 2007 [40], and it is the most representative decomposition-based multi-objective optimization algorithm. Different from the classical multi-objective optimization algorithms, it decomposes an input multi-objective optimization problem (MOP) into a series of single-objective optimization subproblems by using a set of uniformly distributed weight vectors and optimizes these subproblems simultaneously. Since the MOEA/D was proposed, it has attracted increasing attention from scholars; improvements and applications of the MOEA/D are constantly emerging, and it has become one of the best multi-objective optimization algorithms.
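The decomposition idea can be illustrated with a minimal sketch, not the MOEA/D implementation itself: a bi-objective problem is split into scalar subproblems using uniformly spread weight vectors and the Tchebycheff scalarizing function. All names and values here are illustrative.

```python
def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarization of objective vector f against ideal point z*."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, z_star))

# A set of uniformly spread weight vectors for two objectives.
H = 10
weights = [(i / H, 1 - i / H) for i in range(H + 1)]

# Comparing two candidate objective vectors on the (0.5, 0.5) subproblem:
z_star = (0.0, 0.0)                 # ideal point, assumed known for this sketch
f_a, f_b = (2.0, 8.0), (5.0, 5.0)
w = weights[5]
better = f_a if tchebycheff(f_a, w, z_star) < tchebycheff(f_b, w, z_star) else f_b
# f_b wins on this subproblem: max(2.5, 2.5) = 2.5 beats max(1.0, 4.0) = 4.0
```

Each weight vector defines one subproblem, and neighboring subproblems (nearby weight vectors) exchange solutions during the search, which is what gives the MOEA/D its well-distributed final set.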

Li and Landa-Silva [41] proposed evolutionary multi-objective simulated annealing (EMOSA), which incorporates a simulated annealing algorithm and introduces an adaptive search strategy. The experimental results showed that the algorithm obtained good results on the multi-objective knapsack problem and the multi-objective traveling salesman problem. Tan et al. [42] developed a multi-objective memetic algorithm based on decomposition, which integrates a simplified quadratic approximation (SQA) into the MOEA/D as a local search operator to balance its local and global search strategies. Wang et al. [43] designed a multi-objective particle swarm optimization algorithm based on decomposition (MPSO/D). This algorithm adopts relevant measures to ensure that only one solution is present in each subregion in order to maintain solution diversity, and the fitness is calculated by the crowding distance. Ke and Zhang et al. [44] proposed the MOEA/D-ACO algorithm, which incorporates ant colony optimization (ACO) into the MOEA/D; they then tested the performance of the proposed algorithm on 12 instances and obtained good results. Alhindi et al. [45] developed a hybrid algorithm called MOEA/D-GLS, which integrated guided local search (GLS) with the MOEA/D to promote the exploitation ability of the original MOEA/D. The experimental results showed that the proposed MOEA/D-GLS was superior to the original MOEA/D. Zhang et al. [46] proposed MOEA/D-EGO to address expensive MOPs. In this method, the input problem is decomposed into several subproblems, and a prediction model is established for each subproblem based on the evaluated points in order to reduce modeling costs and improve the prediction quality. Wang et al. [47] proposed adaptive replacement strategies that adjust the problem size dynamically for the MOEA/D. This approach can balance the diversity and convergence of the MOEA/D.

However, according to the no free lunch theorem [48], no algorithm can solve all of the optimization problems in all of the fields. Because of the continuous emergence of new optimization problems, the existing algorithms cannot solve these new optimization problems well, so new algorithms or improved algorithms are needed. The MOEA/D algorithm exhibits good diversity in solving MOPs; its characteristics include simplicity, few parameters, and better result distributions. In this paper, the MOGWO/D, which incorporates the GWO into the MOEA/D, is proposed to solve the NPPLS-JP problem.
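For reference, the search mechanism borrowed from the GWO can be sketched as follows. This is the canonical continuous-space position update of the grey wolf optimizer, not the discrete adaptation developed for NPPLS-JP in this paper; the function name and example values are illustrative.

```python
import random

def gwo_step(wolf, alpha, beta, delta, a):
    """Canonical GWO position update: each dimension moves toward positions
    guided by the three best wolves (alpha, beta, delta). The coefficient
    'a' decays from 2 to 0 over iterations; |A| > 1 favors exploration,
    |A| < 1 favors exploitation."""
    new_pos = []
    for d in range(len(wolf)):
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a          # exploration/exploitation coefficient
            C = 2 * r2                  # random emphasis on the leader
            D = abs(C * leader[d] - wolf[d])
            guided.append(leader[d] - A * D)
        new_pos.append(sum(guided) / 3)  # average of the three guided moves
    return new_pos

# With a = 0 the update collapses exactly onto the leaders' average position.
updated = gwo_step([5.0, -3.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0], a=0)
```

The decaying coefficient `a` is what the MOGWO/D leans on to balance the exploration and exploitation abilities of the original MOEA/D.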

## **3. Problem Description and Mathematical Modeling**

### *3.1. Problem Definition and Assumption*

Suppose that *O* orders are processed on *L* production lines; the job types within each order are the same, those of different orders may differ, and the operations of each job type are predetermined and similar. All production lines have the same number of stages, but their machine configurations may differ. If parallel machines exist in some stages of a production line, they are identical parallel machines. Some types of jobs may have machine eligibility constraints; that is, some job operations must be carried out on specific machines at certain stages of some production lines. For any production line *l*, if stage *s* is set as the jumping process operation point, then after processing is completed in stage *s*, the job can be transferred to line *l*′ to continue the processing of the subsequent operations (*l* ≠ *l*′ and *l*, *l*′ ∈ {1, 2, · · · , *L*}); the jumping process operation is unidirectional. The scheduling objectives are to minimize the makespan and the earliness/tardiness penalty cost.

In addition, there are usually several complicated constraints and perturbations in the real-world production environment. To prevent the loss of generality and reduce the computational complexity of the scheduling model, some modeling assumptions are given, as follows.


Meanwhile, to solve the NPPLS-JP problem more conveniently, the concept of a virtual production line is introduced. This means that if there is a jumping process operation point in the manufacturing process of a job of a certain type, all the stages in two different production lines that can complete the processing of jobs of this type are regarded as a new production line, namely, a virtual production line.
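The virtual line construction can be sketched in a few lines. This is an illustrative simplification, assuming each line is represented as an ordered list of per-stage machine groups; the group labels and stage index are invented for the example.

```python
def virtual_line(line_from, line_to, jump_stage):
    """Merge two lines at a jumping process point into one virtual line:
    stages up to the jump point come from the originating line, the rest
    from the destination line."""
    return line_from[:jump_stage] + line_to[jump_stage:]

# Each line is a list of machine groups, one per stage (five stages, as in Figure 1).
line_A = ["A1", "A2", "A3", "A4", "A5"]
line_B = ["B1", "B2", "B3", "B4", "B5"]

# Jumping from line A to line B after the third stage:
line_V = virtual_line(line_A, line_B, jump_stage=3)
# line_V == ["A1", "A2", "A3", "B4", "B5"]
```

Treating the merged sequence as a regular line lets the scheduler handle jumped jobs with the same machinery it uses for jobs that stay on one line.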

In a real-world production environment, a breach of the order due dates is not always unacceptable. In general, a few occurrences of tardiness are allowed in order to achieve the smallest due date penalty cost across the total orders. Here, the fuzzy due date is used to deal with this situation. The trapezoidal fuzzy due date and the triangular fuzzy due date are the two common forms investigated in the scheduling literature [14,49–51]; most researchers choose the type of fuzzy due date depending on the research background and problem characteristics. In our research, neither early nor late completion was the best solution for the automobile industry: completing a production order in advance increases the inventory cost, while delaying an order leads to customer penalty losses, so finishing and delivering the orders within a given period is the most desirable result. Therefore, the trapezoidal fuzzy due date and earliness/tardiness penalty cost model was adopted in this scheduling problem according to the real-world production requirements.

As shown in Figure 2a,b, (a) is the trapezoidal fuzzy due date and earliness/tardiness penalty cost model, and (b) is the corresponding satisfaction model of the fuzzy due date. In model (a), each order has a corresponding trapezoidal fuzzy due date denoted by a trapezoidal fuzzy number $d\_o = (d\_o^1, d\_o^2, d\_o^3, d\_o^4)$, and the earliness and tardiness penalty cost coefficients are $\alpha\_o$ and $\beta\_o$, respectively. A completion time before $d\_o^1$ means that the order is produced prematurely and additional inventory costs are generated; if the completion time comes after $d\_o^4$, the production order seriously violates the due date requirement. These two cases are both unacceptable, so the satisfaction is 0 and the maximal earliness/tardiness penalty is imposed. If the completion time falls in the interval $[d\_o^1, d\_o^2]$ or $[d\_o^3, d\_o^4]$, the due date penalty cost decreases or increases linearly, respectively, and the corresponding due date satisfaction behaves in the opposite way. Only in the interval $[d\_o^2, d\_o^3]$ is the completion time fully acceptable, and the due date penalty cost in this case is 0. Therefore, the production orders should be arranged optimally in time according to their different due dates and earliness/tardiness penalty costs.

**Figure 2.** The earliness/tardiness penalty cost and satisfaction model. (**a**) The earliness/tardiness penalty cost; (**b**) The satisfaction model.

### *3.2. Mathematical Modeling*

A mathematical model for the proposed NPPLS-JP problem is established using the above notations. The two objective functions, which minimize the makespan and the earliness/tardiness penalty cost, are formulated as Equations (1) and (2), respectively:

$$\text{Min} \quad f\_1 = \max\{C\_o\} \tag{1}$$

$$\text{Min} \quad f\_2 = \sum\_{o=1}^{O} P\_o \tag{2}$$

where the completion time and the earliness/tardiness penalty for order *o* are formulated as Equations (3) and (4), respectively:

$$C\_o = \max\{\mu\_{i,o} C\_i\} \quad i \in J \tag{3}$$

$$P\_o = \begin{cases} \alpha\_o & C\_o < d\_o^1 \\ \alpha\_o \cdot \frac{d\_o^2 - C\_o}{d\_o^2 - d\_o^1} & d\_o^1 \le C\_o < d\_o^2 \\ 0 & d\_o^2 \le C\_o \le d\_o^3 \\ \beta\_o \cdot \frac{C\_o - d\_o^3}{d\_o^4 - d\_o^3} & d\_o^3 \le C\_o < d\_o^4 \\ \beta\_o & C\_o \ge d\_o^4 \end{cases} \quad o \in O \tag{4}$$
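The piecewise penalty of Equation (4) translates directly into code. The following sketch uses illustrative numbers for the due-date window and penalty coefficients; only the functional form follows the model.

```python
def earliness_tardiness_penalty(C_o, d1, d2, d3, d4, alpha_o, beta_o):
    """Trapezoidal fuzzy due-date penalty, following Equation (4)."""
    if C_o < d1:                      # far too early: full earliness penalty
        return alpha_o
    if C_o < d2:                      # early: penalty falls linearly to 0
        return alpha_o * (d2 - C_o) / (d2 - d1)
    if C_o <= d3:                     # inside the preferred window
        return 0.0
    if C_o < d4:                      # late: penalty rises linearly
        return beta_o * (C_o - d3) / (d4 - d3)
    return beta_o                     # far too late: full tardiness penalty

# Example with an illustrative preferred window [d2, d3] = [100, 120]:
p = earliness_tardiness_penalty(130, 90, 100, 120, 140, alpha_o=5.0, beta_o=8.0)
# 130 is halfway through the late ramp, so p == 4.0
```

Summing this penalty over all orders gives the second objective, $f\_2$, of Equation (2).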

The constraints are as follows:

$$\sum\_{l=1}^{L} \sum\_{s=1}^{(s\_l - 1)} \left( X\_{i,s,l} Y\_{i,s,s'} \right) = 1 \quad i \in J; \; s' = (s+1) \tag{5}$$


$$\sum\_{l=1}^{L} \sum\_{s=(s\_l+1)}^{(S-1)} \left( X\_{i,s,l} Y\_{i,s,s'} \right) = 1 \; i \in \mathcal{J}; \; s' = (s+1) \tag{6}$$

$$Y\_{i,s\_l,(s\_l+1)} \in \{0, 1\} \quad i \in J \tag{7}$$

$$\sum\_{l=1}^{L} \left( X\_{i,s,l} X\_{i,s',l} \right) = Y\_{i,s,s'} \quad i \in J; \; s \in \{1, 2, \dots, (S-1)\}; \; s' = (s+1) \tag{8}$$

$$\sum\_{k=1}^{M\_{sl}} X\_{k,i,s,l} = X\_{i,s,l} \; i \in \mathcal{J}; \; s \in \mathcal{S}; \; l \in L \tag{9}$$

$$X\_{i,s,l} \leq \sum\_{o=1}^{O} \sum\_{t=1}^{N\_t} x\_{i,o}\, x\_{o,t}\, x\_{t,s,l} \quad i \in J; \; s \in S; \; l \in L \tag{10}$$

$$\sum\_{l=1}^{L} \sum\_{k=1}^{M\_{sl}} X\_{k,i,s,l} = 1 \; i \in \mathcal{J}; \; s \in \mathcal{S} \tag{11}$$

$$Z\_{k,i,i',s,l} + Z\_{k,i',i,s,l} \le X\_{k,s,i,l} \quad i,i' \in J; \; i \neq i'; \; s \in S; \; k \in M\_{s,l}; \; l \in L \tag{12}$$

$$Z\_{k,i,i',s,l} + Z\_{k,i',i,s,l} \le X\_{k,s,i',l} \quad i,i' \in J; \; i \neq i'; \; s \in S; \; k \in M\_{s,l}; \; l \in L \tag{13}$$

$$\begin{aligned} C\_{k,s,i',l'} X\_{k,s,i',l'} + M(1 - Z\_{k,i,i',s,l}) &\geq C\_{k,s,i,l} X\_{k,s,i,l} + \sum\_{o=1}^{O} \sum\_{t=1}^{N\_t} \left( pt\_{t,k,s}\, x\_{o,t}\, x\_{i',o} \right) X\_{k,s,i',l'} \\ i,i' \in J; \; i \neq i'; \; s &\in S; \; k \in \{M\_{s,l} \cup M\_{s,l'}\}; \; l, l' \in L \end{aligned} \tag{14}$$

$$\begin{aligned} C\_{k',s',i,l'} X\_{k',s',i,l'} + M(1 - X\_{k',s',i,l'}) &\geq C\_{k,s,i,l} X\_{k,s,i,l} + \sum\_{o=1}^{O} \sum\_{t=1}^{N\_t} \left( pt\_{t,k',s'}\, x\_{o,t}\, x\_{i,o} \right) X\_{k',s',i,l'} \\ i \in J; \; s \in \{1,2,\cdots,S-1\}; \; s' &= (s+1); \; k \in M\_{s,l}; \; k' \in M\_{s',l'}; \; l, l' \in L \end{aligned} \tag{15}$$

Among the above constraints, constraints (5)–(7) together define the jumping process operation. Constraint (5) states that the operations before the jumping process operation point (including the operation at the jumping process operation stage) can only be completed on one production line. Constraint (6) restricts all of the operations after the jumping process operation point of each job to the same production line. Constraint (7) states that a jumping process operation for any job may or may not occur after another jumping process operation is completed; together, the three constraints guarantee that the jumping process operation occurs at most once. Constraint sets (8) and (9) define the relationships between several decision variables. Constraint (10) states that the processing of each job operation must satisfy the machine eligibility constraints. Constraint (11) ensures that each operation of a job is processed on exactly one machine. Constraint sets (12) and (13) together restrict the processing sequence of two jobs on a machine to only one possible ordering. Constraint (14) guarantees that a machine processes only one job at a time; that is, the completion time of the current job is no less than the sum of the completion time of its immediate predecessor job and the processing time of the current job. Constraint (15) ensures that a job is processed by only one machine at a time; that is, the completion time of a job operation is no less than the sum of the completion time of the immediately preceding operation and the processing time of the current operation.
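To make the role of the big-M disjunctive pair concrete, the following Python sketch (our own illustration, not part of the paper's model or its Java implementation) checks whether a pair of completion times satisfies the logic of constraints (12)–(14) for two jobs sharing a machine:

```python
# Big-M disjunctive logic of constraints (12)-(14) for two jobs i and j that
# share machine k (an illustrative check; variable names are our own).
M = 10**6  # big-M constant

def no_overlap_ok(C_i, C_j, p_i, p_j, z_ij, z_ji):
    """True when completion times C_i, C_j respect the selected ordering.

    z_ij = 1 means job i immediately precedes job j on the machine;
    p_i, p_j are the processing times of the two jobs."""
    # Constraints (12)/(13): at most one ordering may be selected.
    if z_ij + z_ji > 1:
        return False
    # Constraint (14): if i precedes j, then C_j >= C_i + p_j (and vice versa).
    ok_ij = C_j + M * (1 - z_ij) >= C_i + p_j
    ok_ji = C_i + M * (1 - z_ji) >= C_j + p_i
    return ok_ij and ok_ji
```

For example, with processing times 4 and 3 and job *i* first, completion times (4, 7) satisfy the pair, while (4, 5) would make the two jobs overlap on the machine.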

#### **4. Proposed MOGWO/D Algorithm**

The MOEA/D provides a general framework that allows any single-objective optimization method to be applied to the subproblems of a MOP [41]. Compared with other multi-objective optimization algorithms, such as Pareto-based optimization algorithms, MOEA/D has lower computational complexity, and its results have better diversity. In this section, we present a hybrid algorithm that combines GWO with MOEA/D and uses the prey-searching mechanism of the GWO algorithm to enforce a balance between exploration and exploitation. According to the characteristics of the NPPLS-JP problem, problem-specific encoding and decoding rules are given, and the main procedures of the proposed MOGWO/D are stated in detail.

#### *4.1. Original GWO*

The GWO was inspired by the leadership hierarchy and hunting behaviors of grey wolves [52]. In GWO, an initial population is used to simulate the grey wolf pack, which is divided into four hierarchies; the solutions with the best, second-best, and third-best fitness values, denoted α, β, and δ, respectively, are utilized to find the optimal solution by simulating the hunting process of grey wolves. During hunting, the location of the prey is unknown; therefore, to model the hunting behavior of grey wolves and their prey mathematically, α, β, and δ are assumed to be closest to the potential position of the prey. Under the guidance of α, β, and δ, the position vectors are updated to approximate the optimal solutions in the search space.

The main procedure of wolf hunting includes encircling prey and hunting, and the mathematical models for grey wolves approaching and encircling their prey are as follows:

$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right| \tag{16}$$

$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \tag{17}$$

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \tag{18}$$

$$\vec{C} = 2\vec{r}_2 \tag{19}$$

In Equations (16) and (17), $\vec{X}_p$ indicates the position vector of the prey and $\vec{X}$ is the position vector of the grey wolf. $\vec{A}$ and $\vec{C}$ are coefficient vectors, and their calculation methods are shown in Equations (18) and (19). By changing the value of the vector $\vec{A}$, the search process can be guided. When $|\vec{A}| > 1$, α, β, and δ diverge from each other, which favors global search; when $|\vec{A}| < 1$, α, β, and δ converge toward the prey, which contributes to the local search. The parameter $\vec{C}$ is generated randomly to help grey wolves jump out of local optima.

In Equations (18) and (19), $\vec{r}_1$ and $\vec{r}_2$ are randomly generated in [0, 1], and the value of the parameter $\vec{a}$ linearly decreases from 2 to 0 over the course of the iterations. During the hunting process, the position vectors are updated using the following equations:

$$\vec{X}_1 = \vec{X}_{\alpha} - \vec{A}_1 \cdot \left| \vec{C}_1 \cdot \vec{X}_{\alpha} - \vec{X} \right| \tag{20}$$

$$\vec{X}_2 = \vec{X}_{\beta} - \vec{A}_2 \cdot \left| \vec{C}_2 \cdot \vec{X}_{\beta} - \vec{X} \right| \tag{21}$$

$$\vec{X}_3 = \vec{X}_{\delta} - \vec{A}_3 \cdot \left| \vec{C}_3 \cdot \vec{X}_{\delta} - \vec{X} \right| \tag{22}$$

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \tag{23}$$

In Equations (20)–(23), $\vec{X}_{\alpha}$, $\vec{X}_{\beta}$, and $\vec{X}_{\delta}$ are the position vectors of α, β, and δ, respectively, and $\vec{X}$ denotes the current position vector that needs to be updated.
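As a concrete illustration of Equations (16)–(23), the per-dimension update of one wolf can be sketched as follows. This is our own Python sketch (the paper's experiments were coded in Java); the function name and the scalar treatment of $a$ are our simplifications:

```python
import random

def gwo_update(x, x_alpha, x_beta, x_delta, a):
    """One GWO position update for a single wolf (Equations (16)-(23)).

    x is the current position vector; x_alpha, x_beta, x_delta are the
    positions of the three leading wolves; a decreases linearly from 2 to 0
    over the iterations."""
    new_x = []
    for d in range(len(x)):
        guided = []
        for leader in (x_alpha, x_beta, x_delta):
            A = 2 * a * random.random() - a      # Eq. (18)
            C = 2 * random.random()              # Eq. (19)
            D = abs(C * leader[d] - x[d])        # Eq. (16) with the leader as prey
            guided.append(leader[d] - A * D)     # Eqs. (20)-(22)
        new_x.append(sum(guided) / 3.0)          # Eq. (23)
    return new_x
```

When $a = 0$ the random coefficient $A$ vanishes and the new position is simply the centroid of the three leaders, matching the intuition that the pack converges on the prey at the end of the run.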

#### *4.2. MOGWO/D Algorithm Framework*

The proposed MOGWO/D is a hybrid algorithm that integrates GWO into MOEA/D. Similar to the original MOEA/D, the MOGWO/D algorithm decomposes the input multi-objective problem into a series of single-objective scalar optimization subproblems by utilizing a set of uniformly distributed weight vectors and a scalar function. Here, we use the Tchebycheff method to construct each subproblem; then, subproblem *i* can be described as follows [40]:

$$\text{Minimize } g^{te}(x|\lambda_i, z^*) = \max_{1 \le j \le m} \left\{ \lambda_i^j \left| f_j(x) - z_j^* \right| \right\} \tag{24}$$

where $z_j^*$ is the $j$-th component of the reference point $z^* = \left(z_1^*, z_2^*, \cdots, z_m^*\right)^T$, $z_j^* = \min\{f_j(x) \mid x \in \Omega\}$, $j = 1, 2, \cdots, m$, and $\lambda_i = \left(\lambda_i^1, \lambda_i^2, \cdots, \lambda_i^m\right)^T$. The purpose is to minimize each single-objective function $g^{te}(x|\lambda_i, z^*)$, and each subproblem uses the approach of the GWO to update its position vectors. It is worth noting that finding accurate reference points is difficult and time-consuming; therefore, the vector of best objective values $z = (z_1, z_2, \cdots, z_m)^T$ found in the initial population is used as the initial reference point, and it is updated generation by generation over the course of the iterations.

As normalization of the objectives is conducive to increasing the uniformity of the obtained solutions when the input objectives are disparately scaled [40], we use a simple normalization method to replace $f_j$ and obtain the normalized Tchebycheff approach, as follows.

$$\text{Minimize } g^{te}(x|\lambda_i, z^*) = \max_{1 \le j \le m} \left\{ \lambda_i^j \left| \frac{f_j(x) - z_j^*}{z_j^{nad} - z_j^*} \right| \right\} \tag{25}$$

where $z^*$ is the reference point and $z^{nad} = \left(z_1^{nad}, z_2^{nad}, \cdots, z_m^{nad}\right)$ is the nadir point of the objective space. In our calculation, $z^*$ is replaced by $z$ in Step 2.3 of Algorithm 1, and the maximum value of $f_j(x)$ in the current population is the substitute for $z_j^{nad}$. This calculation strategy meets the needs of the algorithm.
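For reference, the scalarization of Equation (25) can be computed as in the following Python sketch (our own illustration; the function and parameter names are ours):

```python
def tchebycheff(f, lam, z, z_nad):
    """Normalized Tchebycheff value g^te(x | lambda_i, z*) of Equation (25).

    f     : objective vector F(x) of the solution,
    lam   : weight vector lambda_i,
    z     : reference (ideal) point,
    z_nad : nadir point (per-objective maxima of the current population)."""
    return max(
        lam[j] * abs((f[j] - z[j]) / (z_nad[j] - z[j]))
        for j in range(len(f))
    )

# A solution at the reference point scores 0; worse objectives raise the value.
g = tchebycheff(f=[30.0, 400.0], lam=[0.5, 0.5],
                z=[20.0, 200.0], z_nad=[40.0, 600.0])
```

Because each objective is divided by its own range, a makespan measured in hundreds of minutes and a penalty cost measured in thousands contribute on the same scale.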

**Algorithm 1** MOGWO/D

**Input:**

A multi-objective problem;

A stopping criterion;

A set of uniformly spread weight vectors $\lambda^1, \lambda^2, \cdots, \lambda^N$;

$N$: population size (equal to the number of weight vectors or subproblems);

$T$: neighborhood size;

$T'$: the number of position vectors to be updated in the neighborhood of a subproblem (where $T' < T$).

**Output:**

External population, EP for short.

**Step 1) Initialization:**

**Step 1.1)** Set EP = ∅;

**Step 1.2)** Generate a set of uniformly distributed weight vectors $\lambda^1, \lambda^2, \cdots, \lambda^N$ and calculate the Euclidean distance between each pair of weight vectors. For each $i = 1, 2, \cdots, N$, define a set $B(i) = \{i_1, i_2, \cdots, i_T\}$, where $\lambda^{i_1}, \lambda^{i_2}, \cdots, \lambda^{i_T}$ are the $T$ closest weight vectors to the weight vector $\lambda^i$.

**Step 1.3)** Randomly generate an initial population $x^1, x^2, \cdots, x^N$ or use a problem-specific approach. Calculate the objective vector of each position vector and label it $FV^i$, $FV^i = F(x^i)$, $i = 1, 2, \cdots, N$.

**Step 1.4)** Initialize $z = (z_1, z_2, \cdots, z_m)^T$, $z_i = \min\left\{f_i(x^1), f_i(x^2), \cdots, f_i(x^N)\right\}$, $i = 1, 2, \cdots, m$.

**Step 1.5)** Calculate $g^{te}(x^j \mid \lambda^i, z)$ for each $j \in B(i)$, and label the three best position vectors as $x_\alpha$, $x_\beta$, and $x_\delta$, respectively, corresponding to weight vector $\lambda^i$.

**Step 2) Update:**

for $i = 1, 2, \cdots, N$, do

**Step 2.1)** Randomly select $T'$ indexes $k_1, k_2, \cdots, k_{T'}$ from $B(i)$; then, yield a set of new position vectors $x_{k_1}, x_{k_2}, \cdots, x_{k_{T'}}$ according to Equations (20)–(23) under the guidance of $x_\alpha^i$, $x_\beta^i$, and $x_\delta^i$, and set $PS(i) = \left\{x_{k_1}, x_{k_2}, \cdots, x_{k_{T'}}\right\}$.

**Step 2.2)** Update $x_\alpha^i$, $x_\beta^i$, and $x_\delta^i$. Compare the value of $g^{te}(x' \mid \lambda^i, z)$, $x' \in PS(i)$, with $g^{te}(x_\alpha^i \mid \lambda^i, z)$, $g^{te}(x_\beta^i \mid \lambda^i, z)$, and $g^{te}(x_\delta^i \mid \lambda^i, z)$; then, update $x_\alpha^i$, $x_\beta^i$, and $x_\delta^i$ with the three best position vectors of all.

**Step 2.3)** Update $z$. For each $j = 1, 2, \cdots, m$, if $f_j(x_\alpha^i) < z_j$, set $z_j = f_j(x_\alpha^i)$.

**Step 2.4)** Update the neighborhood. For each $j \in B(i)$, if $g^{te}(x_\alpha^i \mid \lambda^j, z) \le g^{te}(x^j \mid \lambda^j, z)$, set $x^j = x_\alpha^i$ and update $FV^j = F(x_\alpha^i)$.

**Step 2.5)** Update EP. Add $F(x_\alpha^i)$ to EP if no vectors in EP dominate $F(x_\alpha^i)$; if the number of vectors exceeds the EP capacity, the $k$th nearest neighbor method is used as a truncation strategy. Remove from EP any vectors dominated by $F(x_\alpha^i)$.

**Step 3) Stopping criterion:**

If the stopping criterion is satisfied, stop and output EP. Otherwise, return to Step 2.

Similar to the original MOEA/D, the MOGWO/D algorithm (Algorithm 1) also optimizes a number of scalar optimization subproblems simultaneously in one iteration, thus improving the optimization efficiency of the proposed algorithm.

#### *4.3. Generate a Set of Uniform Weight Vectors*

In Step 1.2 of Algorithm 1, a simplex-lattice design [53] is adopted to generate a set of uniformly distributed weight vectors $\lambda^i = \left(\lambda_1^i, \lambda_2^i, \cdots, \lambda_m^i\right)$, $i \in N$, where $m$ is the dimensionality of the objective space. For each $\lambda^i$, $\sum_{j=1}^{m} \lambda_j^i = 1$ and $\lambda_j^i \in \left\{0, \frac{1}{H}, \frac{2}{H}, \cdots, \frac{H}{H}\right\}$, where $H$ is a predetermined positive integer chosen according to the size of the problem; thus, a total of $C_{H+m-1}^{m-1}$ weight vectors are obtained. For each $\lambda^i$, the Euclidean distance to every other weight vector is calculated, defining a set $B(i) = \{i_1, i_2, \cdots, i_T\}$, in which $i_1, i_2, \cdots, i_T$ are the indexes of the $T$ closest weight vectors to $\lambda^i$; then, $B(i)$ is called the neighborhood of $\lambda^i$ (including $\lambda^i$ itself, as $\lambda^i$ is the closest weight vector to itself, with a Euclidean distance of 0). At the same time, the position vector $x^i$ corresponding to $\lambda^i$ also generates a neighborhood, and each individual in the neighborhood corresponds to a weight vector determined by $B(i)$.
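The simplex-lattice construction described above can be sketched as follows (our own Python illustration; the recursive enumeration is one straightforward way to realize the design):

```python
from math import comb

def simplex_lattice(m, H):
    """Simplex-lattice design: every weight vector whose components lie in
    {0, 1/H, 2/H, ..., H/H} and sum to 1 (C(H+m-1, m-1) vectors in total)."""
    def parts(remaining, dims):
        # enumerate all ways to split `remaining` lattice steps over `dims` axes
        if dims == 1:
            yield (remaining,)
            return
        for head in range(remaining + 1):
            for tail in parts(remaining - head, dims - 1):
                yield (head,) + tail
    return [tuple(v / H for v in p) for p in parts(H, m)]

weights = simplex_lattice(m=3, H=4)   # 15 vectors for a three-objective problem
```

For the bi-objective NPPLS-JP case ($m = 2$), this simply yields the $H + 1$ vectors $(0, 1), (1/H, 1 - 1/H), \ldots, (1, 0)$.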

#### *4.4. Encoding and Decoding*

In the proposed MOGWO/D algorithm, the encoding method is similar to the original GWO, and the initial population is randomly generated from a uniform distribution. All position vectors in the initial population are continuous, but the scheduling solutions to the proposed combinatorial optimization problem are not, so a decoding approach is needed to convert the continuous position vectors to the scheduling solutions.

In the proposed NPPLS-JP problem, each position vector needs to include two pieces of information: a job permutation and a production line sequence, where the production line sequence corresponds to the job permutation. Suppose that there are $O$ orders in a planning cycle; if the number of jobs in order $o$ is $N_o$, then there are a total of $N = \sum_{o=1}^{O} N_o$ jobs in the planning cycle, and each position vector in the population is represented as $x^i = \left[x_1^i, x_2^i, \cdots, x_N^i \mid x_{N+1}^i, \cdots, x_{2N}^i\right]$. For convenience of expression, the first $N$ position values are marked as Part 1, which corresponds to the job permutation, and the last $N$ position values are marked as Part 2, which corresponds to the production line sequence.

Part 1 and Part 2 are decoded separately. The decoding methods for Part 1 and Part 2 are different because the machining operations of some job types have machine eligibility constraints. The selection of the production line involves the decoded information of Part 1; that is, the decoding process of Part 2 depends on the obtained job permutation. To discretize the continuous position vectors, the ranked-order value (ROV) rule [54] is used in the decoding processes of both Part 1 and Part 2.

For each position value in Part 1 of the position vector, the ROV rule generates ROVs according to the position values in ascending order; if identical position values exist, the ROVs increase from left to right, yielding the ROV permutation. Then, the $N_1$ smallest ROVs are picked and each is assigned the value $V_1$, where $V_1$ is the order number of order 1 and $N_1$ is the size of order 1; similarly, ROVs $(N_1 + 1)$ to $(N_1 + N_2)$ are picked and assigned $V_2$, where $V_2$ is the order number of order 2 and $N_2$ is the size of order 2. In the same way, the complete job permutation is obtained.
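The Part 1 decoding can be sketched in Python as follows (our own illustration, not the paper's Java implementation). It uses the example's Part 1 values with order sizes 2, 3, and 1; the resulting permutation is our computation:

```python
def decode_part1(positions, order_sizes):
    """Decode Part 1 with the ranked-order-value (ROV) rule (Section 4.4).

    positions   : continuous position values of Part 1,
    order_sizes : [N_1, N_2, ...], the number of jobs in each order.
    Returns, for each position, the order number of the job scheduled there."""
    # Ranks 1..N in ascending order of value; ties resolved left to right.
    idx = sorted(range(len(positions)), key=lambda i: (positions[i], i))
    rov = [0] * len(positions)
    for rank, i in enumerate(idx, start=1):
        rov[i] = rank
    # The smallest N_1 ROVs become order 1, the next N_2 become order 2, ...
    bounds, total = [], 0
    for size in order_sizes:
        total += size
        bounds.append(total)
    def order_of(r):
        for o, b in enumerate(bounds, start=1):
            if r <= b:
                return o
    return [order_of(r) for r in rov]

perm = decode_part1([3.8, -2.4, 2.3, -3.3, -1.9, -0.6], order_sizes=[2, 3, 1])
```

Here the two smallest values (−3.3 and −2.4) receive order 1, the next three receive order 2, and the largest receives order 3.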

For Part 2, first, as in Part 1, the ROV sequence is obtained according to the position values in Part 2 in ascending order. The next step differs from Part 1. For each ROV in Part 2, the identical ROV in Part 1 is found, the corresponding order number is obtained, and its job type is determined. Then, the job type is looked up in the first column of the given line selection information table, and the corresponding number of optional lines is read from the second column. Next, the remainder of the current ROV divided by the number of available lines is computed. Finally, the production line number in the third column corresponding to the calculated remainder is found and assigned to the position in Part 2 corresponding to the current ROV. In the same way, the complete production line sequence is obtained. This decoding method for Part 2 prevents increases in the computational cost due to invalid solutions caused by the selection of unusable production lines. A simple example, as shown in Table 1, is used to demonstrate the decoding rules.


The sizes of the three orders are two, three, and one, and the total number of jobs is six. The detailed decoding process for the example is shown in Figure 3.



**Figure 3.** An example of decoding.

In this example, there are a total of six jobs. Each position value of the position vector is taken from the uniform distribution U[−6, 6], so the position vector *x* = [3.8, −2.4, 2.3, −3.3, −1.9, −0.6, −3.2, −0.4, 3.8, −3.1, 0.7, 4.6] can be obtained, where Part 1 is [3.8, −2.4, 2.3, −3.3, −1.9, −0.6] and Part 2 is [−3.2, −0.4, 3.8, −3.1, 0.7, 4.6]. The production line selection information used in decoding Part 2 is given in Table 2.
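Using the Part 2 values of this example, the line assignment can be sketched as follows (our own Python illustration). The line options for job types 2 and 3 follow the recoverable rows of Table 2; the row for job type 1 is not recoverable here, so its entry is an assumption:

```python
def rov(values):
    """Ranked-order values: ranks 1..N by ascending value, ties left to right."""
    idx = sorted(range(len(values)), key=lambda i: (values[i], i))
    ranks = [0] * len(values)
    for rank, i in enumerate(idx, start=1):
        ranks[i] = rank
    return ranks

def decode_part2(part2, job_type_by_rov, line_options):
    """Assign a production line to each Part 2 position (Section 4.4).

    job_type_by_rov : job type of the job holding each ROV in Part 1,
    line_options    : job type -> list of usable line numbers (the line
                      selection information table).
    The remainder of the ROV divided by the number of usable lines indexes
    the option list, so an unusable line can never be chosen."""
    lines = []
    for r in rov(part2):
        options = line_options[job_type_by_rov[r]]
        lines.append(options[r % len(options)])
    return lines

part2 = [-3.2, -0.4, 3.8, -3.1, 0.7, 4.6]
# Job types per Part 1 ROV (from the example's job permutation); the line
# list for type 1 is assumed.
lines = decode_part2(part2,
                     job_type_by_rov={1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 3},
                     line_options={1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3]})
```

Every assigned line is guaranteed to come from the job type's own option list, which is exactly why unusable lines can never appear in a decoded solution.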

**Table 2.** The line selection information of the example.


| Job Type | Number of Optional Lines | Optional Lines |
|---|---|---|
| 2 | 3 | 1, 2, 3 |
| 3 | 3 | 1, 2, 3 |

#### *4.5. Updating the Position Vectors*

The Tchebycheff approach is adopted to decompose the input MOP into $N$ single-objective optimization subproblems and to optimize them simultaneously. The neighborhood of each subproblem is defined based on the distances between their weight vectors. Adjacent subproblems have similar approximate solutions, so when each subproblem is optimized, only the information of its neighboring subproblems is used. In every generation, $T'$ position vectors corresponding to the subproblems are updated, where $T'$ is a positive integer smaller than the neighborhood size, and these $T'$ position vectors are randomly selected from the neighborhood corresponding to the subproblem.

Each selected position vector of each subproblem is updated, and this information is used in its neighborhood. First, position vectors are updated according to Equations (20)–(23) under the guidance of the current three best position vectors $x_\alpha$, $x_\beta$, and $x_\delta$, and the position vector set $PS = \{x_1, x_2, \cdots, x_{T'}\}$ of the subproblem is obtained. Second, the $g^{te}$ value of every position vector in $GS(i) = PS \cup \{x_\alpha, x_\beta, x_\delta\}$ corresponding to $\lambda^i$ is calculated. Then, $x_\alpha$, $x_\beta$, and $x_\delta$ are updated with the best, second-best, and third-best position vectors in $GS(i)$ corresponding to the weight vector $\lambda^i$, where $x_\alpha$ is the optimal solution to the current subproblem. After that, the reference point is updated by calculating the objectives with $x_\alpha$: if $f_j(x_\alpha) < z_j$, then $z_j = f_j(x_\alpha)$. Third, the neighborhood of the current subproblem is updated with respect to the weight vectors of the neighborhood: for each $j \in B(i)$, if $g^{te}(x_\alpha \mid \lambda^j, z^*) \le g^{te}(x^j \mid \lambda^j, z^*)$, then $x^j = x_\alpha$ and $FV^j = F(x_\alpha)$ simultaneously. The pseudocode for implementing the update process in one iteration is shown in Algorithm 2.

**Algorithm 2** The update process of one iteration

```
While (the stopping condition is not satisfied) {
  // Main loop
  for (i = 1; i <= N; i++) {
    // optimize the N subproblems simultaneously
    idxes = getRandoms(T', B(i));
    selectedPop = getIndividuals(neighborhood, idxes);
    PS(i) = updateIndividuals(selectedPop, x_alpha, x_beta, x_delta);
    sortedPop = sort(PS(i));
    x_alpha = sortedPop(0);
    x_beta = sortedPop(1);
    x_delta = sortedPop(2);
    updateZ(x_alpha);            // update the reference point
    updateNeighborhood(x_alpha);
    updateEP(x_alpha);
  }
}
```
#### *4.6. Updating the External Population (EP)*

After the best position vector $x_\alpha$ of every subproblem is obtained in each generation, the EP needs to be updated. If $F(x_\alpha)$ is not dominated by any individual in the EP, $F(x_\alpha)$ is added to the EP, and the individuals that are dominated by $F(x_\alpha)$ are removed.

In the search process of the algorithm, excess individuals are added to the EP. Too many nondominated individuals are not of great significance for solving practical problems, but they increase the difficulty of the data analysis. Therefore, a special truncation strategy is used to maximize the retention of nondominated solution characteristics while maintaining the appropriate EP size. In this strategy, the *k*th nearest neighbor method [36] is used here to evaluate the individuals in the EP, and the calculation approach for the *k*th nearest neighbor distance is as follows.

$$D(i) = \frac{1}{\sigma\_i^k + 2} \tag{26}$$

$$k = \sqrt{|P| + |EP|} \tag{27}$$

where $D(i)$ is an inverse function of the Euclidean distance from individual $i$ to its $k$th nearest neighbor, which reflects the density information of the objective space. $\sigma_i^k$ is the Euclidean distance from individual $i$ to its $k$th nearest neighbor, the value of $k$ is calculated using Equation (27), and $|P|$ and $|EP|$ are the population size and external population size, respectively. The smaller the $D(i)$ value is, the more scattered the solutions are.
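The density estimate of Equations (26) and (27) can be sketched as follows (our own Python illustration; the paper does not state how the square root in Equation (27) is rounded, so truncation to an integer is an assumption):

```python
from math import dist, sqrt

def knn_density(front, pop_size):
    """D(i) of Equations (26)-(27) for every member of the EP.

    front    : objective vectors in the external population,
    pop_size : |P|; k = sqrt(|P| + |EP|), truncated to an integer here."""
    k = int(sqrt(pop_size + len(front)))
    out = []
    for i, fi in enumerate(front):
        dists = sorted(dist(fi, fj) for j, fj in enumerate(front) if j != i)
        sigma_k = dists[min(k, len(dists)) - 1]   # k-th nearest distance
        out.append(1.0 / (sigma_k + 2.0))
    return out

# A point in a crowded cluster gets a larger D(i) than an isolated point,
# so truncation removes crowded points first.
front = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
density = knn_density(front, pop_size=4)
```

In this toy front, the four clustered points all score higher than the isolated point at (5, 5), so truncation would thin the cluster first.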

#### **5. Computational Experiments and Results Analysis**

NPPLS-JP is a novel multiline scheduling problem derived from a real-world manufacturing workshop; it is an extension of regular flow shop scheduling problems for which no related research exists, so no benchmarks are available for the proposed problem. In this section, test experiments are designed to assess the performance of the proposed MOGWO/D algorithm by comparing its results on the proposed NPPLS-JP problem with those of three other well-known multi-objective optimization algorithms, i.e., NSGA-II [37], MOGWO [55], and MOPSO [56], in terms of three metrics. The results illustrate the effectiveness of the proposed MOGWO/D algorithm.

#### *5.1. Evaluation Metrics*

Different from single objective optimization problems, MOPs involve the simultaneous optimization of multiple conflicting objectives. An improvement of one objective results in the deterioration of another objective, so MOP algorithms usually obtain a set of tradeoff solutions in terms of the desired objectives, namely, nondominated solutions. There is no absolute optimum among these solutions, and fitness functions cannot evaluate their effectiveness, so a set of metrics is needed to evaluate the performance of multi-objective algorithms for solving MOPs. If the obtained Pareto front is closer to the Pareto optimal front, covering the extreme regions as much as possible, and the nondominated solutions are uniformly distributed in the obtained Pareto front, this means that the obtained results have better convergence and distribution effects. In this paper, the following three metrics are used:

(1) Generational distance (GD) [57]. The GD is the most common multi-objective indicator for convergence. It is used to calculate the mean Euclidean distance between the obtained Pareto front and the Pareto optimal front. The calculation formula for the GD is as follows.

$$GD = \frac{\sqrt{\sum\_{i=1}^{|OF|} d\_i^2}}{|OF|} \tag{28}$$

where *d<sup>i</sup>* is the Euclidean distance from point *i* of the obtained Pareto front to the closest point in the Pareto optimal front, and |*OF*| is the number of nondominated solutions in the obtained Pareto front; therefore, GD denotes the mean value of the closest distance from each point in the obtained Pareto front to the Pareto optimal front. A smaller GD value indicates that the obtained Pareto front is closer to the Pareto optimal front; namely, the obtained Pareto front has good convergence. When GD equals zero, the obtained Pareto front is located at the Pareto optimal front.

(2) Inversed generational distance (IGD) [58]. This metric is a variant of the GD and is a comprehensive performance indicator. This metric represents the mean Euclidean distance from the points in the Pareto optimal front to the obtained Pareto front. The formulation of the IGD is as follows.

$$IGD = \frac{\sqrt{\sum\_{i=1}^{|PF|} D\_i^2}}{|PF|} \tag{29}$$

where |*PF*| denotes the number of points in the Pareto optimal front and *D<sup>i</sup>* is the Euclidean distance from point *i* in the Pareto optimal front to the closest point in the obtained Pareto front. A smaller IGD value indicates better convergence and diversity for the obtained Pareto front. In our experiments, the nondominated solutions obtained from all independent runs of the four algorithms on each instance are regarded as the Pareto optimal front of that instance.

(3) Spread (∆) [37]. ∆ is the diversity metric of the multi-objective optimization that can measure the distribution and spread of solutions. ∆ is calculated as follows.

$$
\triangle = \frac{\sum\_{j=1}^{m} d\_j^e + \sum\_{i=1}^{n} \left| d\_i - \overline{d} \right|}{\sum\_{j=1}^{m} d\_j^e + n\overline{d}} \tag{30}
$$

where $m$ is the number of objectives and $n$ is the number of solutions in the obtained Pareto front. $d_j^e$ is the minimum Euclidean distance from the nondominated solutions in the obtained Pareto front to the extreme point $j$ of the Pareto optimal front, $d_i$ is the Euclidean distance between the closest pairwise points in the obtained Pareto front, and $\overline{d}$ is the average value of the $d_i$. A smaller value of ∆ represents a better distribution and increased diversity. The calculation of ∆ is simple and does not require knowledge of the whole Pareto optimal front; it uses only the extreme objectives of the Pareto optimal front.
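Equation (30) can be sketched as follows (our own Python illustration; reading "closest pairwise points" as each point's distance to its nearest neighbor is an interpretation, since the text does not fix it precisely):

```python
from math import dist

def spread(front, extremes):
    """Spread (Delta) of Equation (30).

    front    : obtained nondominated objective vectors,
    extremes : extreme points of the Pareto optimal front (one per objective).
    d_i is taken here as each point's distance to its nearest neighbour."""
    d_ext = [min(dist(e, p) for p in front) for e in extremes]
    d = [min(dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / len(d)
    num = sum(d_ext) + sum(abs(di - d_bar) for di in d)
    den = sum(d_ext) + len(front) * d_bar
    return num / den

# A perfectly uniform front that reaches both extremes has Delta near 0.
uniform = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
delta = spread(uniform, extremes=[(0.0, 1.0), (1.0, 0.0)])
```

A clustered front with the same extremes yields a larger ∆, reflecting its worse distribution.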

#### *5.2. Instance Generation*

In the ideal situation, the proposed MOGWO/D algorithm is suitable for all kinds of problems that meet the model definition of NPPLS-JP, with different numbers of parallel production lines and jobs, different mixing ratios of job types, and different configurations of production lines. Here, a set of instances is generated by combining production system data and customer demand data to test the performance of the proposed MOGWO/D algorithm; that is, we evaluate the effectiveness of the proposed algorithm in different production scenarios by varying the order quantity and order capacity. In the experiment in this section, we only considered the case with two parallel production lines and gave three production scenarios with different configurations, four different numbers of orders, and three different order capacities; thus, a total of 3 × 4 × 3 = 36 instances were generated by combining the three factors.

The three production scenarios are shown in Figure 4a–c, each with two nonidentical parallel production lines, and each production line consisting of five stages corresponding to the five operations of the eight types of jobs. The three production scenarios consist of two general flow lines; a general flow production line and a hybrid flow production line; and two hybrid flow lines, respectively. The machines of each production scenario are taken from Table 3. Notably, the machine configuration of the first line in Scenario (b) is the same as that of the first line in Scenario (a), and the machine configuration of the second line in Scenario (c) is the same as that of the second line in Scenario (b). All of the parallel machines in each line of any production scenario are identical parallel machines. Table 3 also gives the processing time of each operation of the eight types of jobs on each machine, where "−" indicates that jobs of the current type cannot be processed on the corresponding machines. Therefore, if the next operation of a job cannot be processed on the current line, a jumping process operation will occur. The production line selection information for decoding is shown in Table 4.


**Table 3.** The optional machines and the processing times for each type of job.

| Job Type | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 |
|---|---|---|---|---|---|
| 1 | 41, 41, 30, 30 | 39, 39, 34, 34 | 42, 42, 31, 31 | −, −, 26, 26 | 49, 49, 23, 23 |
| 2 | 40, 40, 31, 31 | 42, 42, 34, 34 | 40, 40, 32, 32 | −, −, 23, 23 | 50, 50, 23, 23 |
| 3 | 40, 40, 32, 32 | 38, 38, 32, 32 | 41, 41, 32, 32 | 47, 47, 26, 26 | 52, 52, 24, 24 |
| 4 | 39, 39, 32, 32 | 41, 41, 32, 32 | 39, 39, 29, 29 | 48, 48, 26, 26 | 52, 52, 24, 24 |
| 5 | 42, 42, 34, 34 | 40, 40, 34, 34 | 39, 39, 30, 30 | −, −, 24, 24 | 51, 51, 24, 24 |
| 7 | 39, 39, 32, 32 | 40, 40, 32, 32 | 40, 40, 33, 33 | 54, 54, 24, 24 | 54, 54, 24, 24 |
| 8 | 43, 43, 31, 31 | 42, 42, 31, 31 | 43, 43, 34, 34 | 57, 57, 26, 26 | 56, 56, 26, 26 |

**Table 4.** Production line selection information for the experiments.

**Figure 4.** The three different production scenarios. (**a**) Scenario 1; (**b**) Scenario 2; (**c**) Scenario 3.

The number of orders is $O \in \{2, 4, 6, 8\}$, and the capacity of order $o$ is $N_o \in \{10, 20, 30\}$. As the jobs in each order are of the same type, $O$ represents not only the number of orders but also the number of job types in the production cycle, and $O$ and $N_o$ together determine the total number of jobs. Each instance is denoted in the form "$x\_y\_z$", where $x$ represents the production scenario taken from the three scenarios described earlier in Figure 4a–c, $y$ is the total number of orders (corresponding to the first $y$ types in Table 1), and $z$ is the number of jobs in each order. For example, 1\_4\_10 represents the instance in Scenario 1 that includes four orders with 10 jobs in each order.

For each order in the instances, the trapezoidal fuzzy due date is generated as follows. First, a uniform distribution *U*(0.8 × *N<sup>J</sup>*, 3 × *N<sup>J</sup>*) is given, where *N<sup>J</sup>* = *N<sup>o</sup>* × ∑<sup>*S*</sup><sub>*s*=1</sub> *pt<sup>s</sup>*, *N<sup>o</sup>* is the number of jobs in order *o*, *S* is the number of operations in a job, and *pt<sup>s</sup>* is the maximum processing time of operation *s* for the jobs in order *o* on any available machine, as shown in Table 3. Then, four integers are randomly drawn from *U*(0.8 × *N<sup>J</sup>*, 3 × *N<sup>J</sup>*) and sorted in ascending order, as required for the trapezoidal fuzzy due date. The earliness and tardiness penalty coefficients of all orders are set to 5 and 15, respectively.
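As a concrete illustration, the generation procedure above can be sketched as follows; the class and method names are illustrative and not taken from the paper's implementation.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the trapezoidal fuzzy due-date generation described above.
public class FuzzyDueDate {
    // Draw four integers uniformly from [0.8 * nJ, 3 * nJ] and sort them in
    // ascending order, yielding the trapezoidal fuzzy due date (d1, d2, d3, d4).
    static int[] trapezoidalDueDate(double nJ, Random rng) {
        int lo = (int) Math.ceil(0.8 * nJ);
        int hi = (int) Math.floor(3.0 * nJ);
        int[] d = new int[4];
        for (int i = 0; i < 4; i++) {
            d[i] = lo + rng.nextInt(hi - lo + 1);
        }
        Arrays.sort(d); // d1 <= d2 <= d3 <= d4, as the trapezoid requires
        return d;
    }
}
```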

There are 36 instances to be tested using the proposed MOGWO/D algorithm and the three compared algorithms. For experimentation, the four algorithms are all coded in Java, and all instances are run on an HP Pavilion m4 notebook PC with a Windows 10 Professional 64-bit operating system, 8 GB of RAM, and an Intel Core i5 CPU at 2.60 GHz.

#### *5.3. Parameter Settings*

From the 36 instances generated above, the total number of jobs takes eight different values: 20, 40, 60, 80, 120, 160, 180, and 240. For instances with different job scales, different population sizes and numbers of iterations should be set so as to obtain the best optimization performance; thus, four different population sizes are given in Table 5. For fairness, MOGWO and MOPSO use the same encoding and decoding methods as the proposed MOGWO/D algorithm. Because of its different mechanism, the encoding and decoding methods of NSGA-II differ slightly. In the NSGA-II algorithm, Part 1 and Part 2 use a permutation encoding scheme [59] to obtain independent encodings, both of which are natural number permutations. The decoding scheme is similar to that of the proposed MOGWO/D algorithm; the only difference is that the individuals in the population are discrete, and these discrete values are natural number permutations. They can be regarded as ROVs and then decoded according to the decoding rules in Section 4. Unlike in the other three algorithms, the encodings in NSGA-II do not need to be discretized first.
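The ROV decoding mentioned above maps a continuous individual to a job permutation by ranking its components. A minimal sketch, with illustrative names and assuming ascending ranking, is:

```java
import java.util.Arrays;
import java.util.Comparator;

// Illustrative ranked-order-value (ROV) decoding: the continuous components of
// an individual are ranked in ascending order to obtain a job permutation.
public class RovDecoder {
    static int[] decode(double[] position) {
        Integer[] idx = new Integer[position.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort indices by their component values: the job whose component is
        // smallest is sequenced first, and so on.
        Arrays.sort(idx, Comparator.comparingDouble(i -> position[i]));
        int[] sequence = new int[position.length];
        for (int rank = 0; rank < idx.length; rank++) sequence[rank] = idx[rank];
        return sequence;
    }
}
```

For example, the position vector (0.5, 0.1, 0.9) decodes to the job sequence (1, 0, 2).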


**Table 5.** The parameter settings for the four algorithms.

In addition, in NSGA-II, binary tournament selection is used for Part 1 and Part 2; the partially mapped crossover (PMX) and the swap operation are used as the crossover operators of the two parts, respectively; and the insert operation is used as the mutation operator of both parts. The other parameter settings of the four algorithms are also shown in Table 5.
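The PMX operator used for Part 1 can be sketched as a generic textbook PMX with explicit cut points; the names are illustrative, and the paper's implementation details may differ.

```java
import java.util.Arrays;

// Illustrative partially mapped crossover (PMX) on permutations.
public class Pmx {
    static int[] crossover(int[] p1, int[] p2, int cut1, int cut2) {
        int n = p1.length;
        int[] child = new int[n];
        Arrays.fill(child, -1);
        // Copy the mapping section from the first parent.
        for (int i = cut1; i < cut2; i++) child[i] = p1[i];
        // Fill the rest from the second parent, resolving conflicts via the
        // p1[i] <-> p2[i] mapping defined by the copied section.
        for (int i = 0; i < n; i++) {
            if (i >= cut1 && i < cut2) continue;
            int gene = p2[i];
            while (contains(child, cut1, cut2, gene)) {
                int pos = indexOf(p1, cut1, cut2, gene);
                gene = p2[pos];
            }
            child[i] = gene;
        }
        return child;
    }
    static boolean contains(int[] a, int from, int to, int v) {
        for (int i = from; i < to; i++) if (a[i] == v) return true;
        return false;
    }
    static int indexOf(int[] a, int from, int to, int v) {
        for (int i = from; i < to; i++) if (a[i] == v) return i;
        return -1;
    }
}
```

The swap and insert operators mentioned above simply exchange or relocate single genes and are omitted here.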

Each instance is run 30 times independently for the proposed MOGWO/D algorithm and the other three comparison algorithms.

#### *5.4. Experimental Results Analysis*

Table 6 presents the means and standard deviations of the three metrics for the MOGWO/D algorithm and the three comparison algorithms: NSGA-II, MOGWO, and MOPSO. Comparing the GD, IGD, and Spread values of the four multi-objective algorithms in Table 6 shows that the proposed MOGWO/D algorithm achieves better results in the vast majority of instances. Specifically, with regard to the convergence metric GD, the MOGWO/D algorithm achieves the best results on 33 instances; it is inferior to MOGWO on the "2\_8\_10" and "2\_8\_20" instances and inferior to MOPSO on "3\_4\_30", but the differences are very small. For the Spread metric, the MOGWO/D algorithm achieves the best value on 34 instances; it is only inferior to MOGWO on "2\_8\_10" and inferior to NSGA-II on "3\_4\_30", but still better than MOPSO. Regarding the comprehensive metric IGD, the MOGWO/D algorithm obtains the best values on 34 instances; only on instance "2\_8\_10" is MOGWO slightly better, and NSGA-II is superior to the MOGWO/D algorithm on "3\_4\_30". Furthermore, the standard deviations of the three metrics achieved by the MOGWO/D algorithm are better than those of the three comparison algorithms in the vast majority of instances.
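For reference, the comprehensive metric IGD is conventionally computed as the mean Euclidean distance from each reference-front point to its nearest point in the obtained front (lower is better). A minimal sketch with illustrative names follows; any normalization the paper may apply is not reproduced here.

```java
// Sketch of the inverted generational distance (IGD) metric used in Table 6.
public class Igd {
    static double igd(double[][] referenceFront, double[][] obtainedFront) {
        double sum = 0.0;
        for (double[] r : referenceFront) {
            // Distance from this reference point to the closest obtained point.
            double best = Double.MAX_VALUE;
            for (double[] o : obtainedFront) {
                double d = 0.0;
                for (int k = 0; k < r.length; k++) d += (r[k] - o[k]) * (r[k] - o[k]);
                best = Math.min(best, Math.sqrt(d));
            }
            sum += best;
        }
        return sum / referenceFront.length; // mean over the reference front
    }
}
```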

**Table 6.** Means and deviations of the three metrics obtained by MOGWO/D, NSGA-II, MOGWO, and MOPSO.




It is clear that the MOGWO/D algorithm has better optimization performance for solving NPPLS-JP problems and meets the needs of solving such problems. This may be because of the better balance of the MOGWO/D algorithm between exploitation and exploration, as well as the strategy in which the three best solutions *xα*, *xβ*, and *x<sup>δ</sup>* are obtained from different levels of the grey wolves' social hierarchy. The experimental results show that the proposed algorithm is superior to the other multi-objective optimization algorithms in solving the NPPLS-JP problem.

To observe the experimental results more intuitively, Figure 5a–l gives the convergence curves for the instances in which the order capacity is 20; each curve is the best running result according to the comprehensive performance metric IGD over 30 independent runs for the proposed MOGWO/D algorithm and the three comparison algorithms. These graphs clearly show that the convergence curves of the proposed algorithm are better than those of the comparison algorithms. Only on instances "3\_2\_20" and "3\_8\_20" is the proposed algorithm similar to the comparison algorithms; on all other instances it yields much better convergence curves, which demonstrates the effectiveness of the proposed MOGWO/D algorithm in solving the NPPLS-JP problem.

The NPPLS-JP problem proposed in this paper comes from the actual demand of an axle housing machining workshop. This scheduling demand exists widely in a multivariety mixed production environment, so the proposed model and method are of great significance for production practice. This research has a substantive impact on improving the production efficiency of the workshop, and can significantly enhance the production management level of enterprises, so as to increase the market competitiveness.

**Figure 5.** The convergence curves for the instance set with *z* = 20. (**a**) 1\_2\_20; (**b**) 1\_4\_20; (**c**) 1\_6\_20; (**d**) 1\_8\_20; (**e**) 2\_2\_20; (**f**) 2\_4\_20; (**g**) 2\_6\_20; (**h**) 2\_8\_20; (**i**) 3\_2\_20; (**j**) 3\_4\_20; (**k**) 3\_6\_20; (**l**) 3\_8\_20.

#### **6. Conclusions**

In this paper, a multi-objective NPPLS-JP derived from the real-life axle housing machining workshop of an axle manufacturer is studied. In the established NPPLS-JP model, the structures of all parallel lines are symmetrical. However, because of the demands

of multivariety mixed production, the process capabilities and production capacities of these parallel production lines are asymmetric, and some types of job operations must be processed on specific lines. This situation greatly affects the production efficiency of the production system and increases the difficulty of scheduling. To make multivariety mixed production more efficient and to maximize the utilization of the production capacity, a jumping process operation is introduced into the proposed model, which is the largest difference relative to general parallel production line scheduling problems. The NPPLS-JP model involves multiline scheduling, multivariety mixed production, machine eligibility constraints, and multiple objectives, so it is an NP-hard scheduling problem. For this model, we propose a hybrid multi-objective optimization algorithm that incorporates the single-objective GWO into the MOEA/D. The basic idea is to compensate for the shortcomings of the original algorithms by reasonably combining several algorithms so as to balance their exploration and exploitation abilities. To verify the effectiveness of the proposed algorithm, a set of instances is designed, and comparative experiments are conducted using the MOGWO/D algorithm as well as three other famous multi-objective optimization algorithms. The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms for solving the NPPLS-JP problem. Furthermore, the experiments also prove that hybridizing algorithms can improve performance and expand the application field of the constituent algorithms.

In future research, we could solve the NPPLS-JP problem under the condition of considering the sequence-dependent setup times, or we could design new metaheuristics to solve the NPPLS-JP problem. Another interesting research direction is to explore new problem-specific rules to improve the performance of the MOGWO/D algorithm in terms of solving the NPPLS-JP. We can also focus on utilizing the MOGWO/D algorithm to solve other workshop scheduling problems, such as job shop scheduling problems and regular flow shop scheduling problems.

**Author Contributions:** Conceptualization, L.Y.; Formal analysis, G.X. and J.M.; Investigation, G.X. and J.L.; Methodology, G.X., L.Y. and J.M.; Project administration, Z.G.; Resources, L.Y.; Software, G.X. and J.L.; Writing—original draft, G.X. and L.Y.; Writing—review & editing, L.Y. and J.M.; funding acquisition, L.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Youth Program of National Natural Science Foundation of China (Grant No. 51905196) and the National Key R&D Program of China (Grant No. 2018YFB1702700).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The mathematical modeling notations are listed as follows.



Decision variables


#### **References**


## *Article* **Total Domination in Rooted Product Graphs**

## **Abel Cabrera Martínez and Juan A. Rodríguez-Velázquez \***

Departament d'Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, Av. Països Catalans 26, 43007 Tarragona, Spain; abel.cabrera@urv.cat

**\*** Correspondence: juanalberto.rodriguez@urv.cat

Received: 2 October 2020; Accepted: 20 November 2020; Published: 23 November 2020

**Abstract:** During the last few decades, domination theory has been one of the most active areas of research within graph theory. Currently, there are more than 4400 published papers on domination and related parameters. In the case of total domination, there are over 580 published papers, and 50 of them concern the case of product graphs. However, none of these papers discusses the case of rooted product graphs. Precisely, the present paper covers this gap in the theory. Our goal is to provide closed formulas for the total domination number of rooted product graphs. In particular, we show that there are four possible expressions for the total domination number of a rooted product graph, and we characterize the graphs reaching these expressions.

**Keywords:** total domination; domination; rooted product graph

Let *G* be a graph. The open neighborhood of a vertex *v* ∈ *V*(*G*) is defined to be *N*(*v*) = {*u* ∈ *V*(*G*) : *u* is adjacent to *v*}. A set *S* ⊆ *V*(*G*) is a dominating set of *G* if *N*(*v*) ∩ *S* ≠ ∅ for every vertex *v* ∈ *V*(*G*) \ *S*. Let D(*G*) be the set of dominating sets of *G*. The domination number of *G* is defined to be,

$$\gamma(G) = \min\{|S| \colon S \in \mathcal{D}(G)\}.$$

A set *S* ⊆ *V*(*G*) is a total dominating set, TDS, of a graph *G* without isolated vertices if every vertex *v* ∈ *V*(*G*) is adjacent to at least one vertex in *S*. Let D*t*(*G*) be the set of total dominating sets of *G*.

The total domination number of *G* is defined to be,

$$\gamma\_t(G) = \min\{|S| \, : \, S \in \mathcal{D}\_t(G)\}.$$

By definition, D*t*(*G*) ⊆ D(*G*), so that *γ*(*G*) ≤ *γt*(*G*).

We define a *γt*(*G*)-set as a set *S* ∈ D*t*(*G*) with |*S*| = *γt*(*G*). The same agreement will be assumed for optimal parameters associated with other characteristic sets defined in the paper. For instance, a *γ*(*G*)-set will be a set *S* ∈ D(*G*) with |*S*| = *γ*(*G*).
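For small graphs, *γ*(*G*) and *γt*(*G*) can be checked directly by exhaustive search over vertex subsets. The following brute-force sketch is not part of the paper; it simply enumerates candidate sets by bitmask, which is feasible only for small orders.

```java
// Brute-force computation of gamma(G) (total=false) and gamma_t(G) (total=true)
// for a small graph given by its adjacency matrix; adj[v][u] means v ~ u.
public class Domination {
    static int dominationNumber(boolean[][] adj, boolean total) {
        int n = adj.length;
        int best = n; // V(G) itself always dominates (totally, if no isolated vertex)
        for (int mask = 1; mask < (1 << n); mask++) {
            if (Integer.bitCount(mask) >= best) continue; // prune larger sets
            boolean ok = true;
            for (int v = 0; v < n && ok; v++) {
                // For total domination every vertex needs a neighbor in S;
                // for ordinary domination a vertex of S dominates itself.
                if (!total && (mask & (1 << v)) != 0) continue;
                boolean dominated = false;
                for (int u = 0; u < n && !dominated; u++)
                    if (adj[v][u] && (mask & (1 << u)) != 0) dominated = true;
                ok = dominated;
            }
            if (ok) best = Integer.bitCount(mask);
        }
        return best;
    }
}
```

For example, the path *P*4 has *γ* = *γt* = 2, while the star *K*1,3 has *γ* = 1 and *γt* = 2, illustrating that the inequality *γ*(*G*) ≤ *γt*(*G*) can be strict.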

The theory of domination in graphs has been extensively studied. For instance, there are more than 4400 papers already published on domination and related parameters. In particular, we cite the following books [1,2]. In the case of total domination, there are over 580 published papers and one book [3]. Among these papers on total domination in graphs, there are over 50 which concern the case of product graphs. Surprisingly, none of these papers discusses the case of rooted product graphs. The present paper covers that gap in the theory.

In order to present our results, we need to introduce some additional notation and terminology. The closed neighborhood of *v* ∈ *V*(*G*) is defined to be *N*[*v*] = *N*(*v*) ∪ {*v*}. A vertex *v* ∈ *V*(*G*) is universal if *N*[*v*] = *V*(*G*), while it is a leaf if |*N*(*v*)| = 1. The set of leaves of *G* will be denoted by L(*G*). A support vertex is a vertex *v* with *N*(*v*) ∩ L(*G*) ≠ ∅. The set of support vertices of *G* will be denoted by S(*G*). If *v* is a vertex of a graph *G*, then the vertex-deletion subgraph *G* − {*v*} is the

subgraph of *G* induced by *V*(*G*) \ {*v*}. By analogy, we define the subgraph *G* − *S* for an arbitrary subset *S* ⊆ *V*(*G*).

The concept of rooted product graph was introduced in 1978 by Godsil and McKay [4]. Given a graph *G* of order n(*G*) and a graph *H* with root vertex *v*, the rooted product graph *G* ◦*<sup>v</sup> H* is defined as the graph obtained from *G* and *H* by taking one copy of *G* and n(*G*) copies of *H* and identifying the *i*-th vertex of *G* with the root vertex *v* in the *i*-th copy of *H* for every *i* ∈ {1, 2, . . . , n(*G*)}. If *H* or *G* is a trivial graph, then *G* ◦*<sup>v</sup> H* is equal to *G* or *H*, respectively. In this sense, hereafter we will only consider graphs *G* and *H* with no isolated vertex.
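The definition above translates directly into a small adjacency-matrix construction. The following sketch is not from the paper, and the vertex numbering is an assumption: vertex *i* of *G* is identified with the root of the *i*-th copy of *H*.

```java
// Sketch of the rooted product G ∘_v H as an adjacency matrix.
// Vertex i * |V(H)| + j denotes vertex j of the i-th copy of H, so the root of
// copy i (playing the role of vertex i of G) is vertex i * |V(H)| + root.
public class RootedProduct {
    static boolean[][] rootedProduct(boolean[][] g, boolean[][] h, int root) {
        int nG = g.length, nH = h.length;
        boolean[][] p = new boolean[nG * nH][nG * nH];
        // Each copy i of H keeps all of its internal edges.
        for (int i = 0; i < nG; i++)
            for (int a = 0; a < nH; a++)
                for (int b = 0; b < nH; b++)
                    p[i * nH + a][i * nH + b] = h[a][b];
        // Edges of G join the identified root vertices of the copies.
        for (int i = 0; i < nG; i++)
            for (int j = 0; j < nG; j++)
                if (g[i][j]) p[i * nH + root][j * nH + root] = true;
        return p;
    }
}
```

With *G* = *K*2 and *H* = *K*2 rooted at one endpoint, the construction yields a path on four vertices, as expected from identifying the two roots along the edge of *G*.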

**Figure 1.** The set of black-coloured vertices forms a *γt*(*G* ◦*<sup>v</sup> H*)-set.

Figure 1 shows an example of a rooted product graph. In this case, the set of black-coloured vertices forms a TDS of *G* ◦*<sup>v</sup> H* and *γt*(*G* ◦*<sup>v</sup> H*) = 14 = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1).

For every *x* ∈ *V*(*G*), *H<sup>x</sup>* ∼= *H* will denote the copy of *H* in *G* ◦*<sup>v</sup> H* containing *x*. The restriction of any set *S* ⊆ *V*(*G* ◦*<sup>v</sup> H*) to *V*(*Hx*) will be denoted by *Sx*, and the restriction to *V*(*H<sup>x</sup>* − {*x*}) will be denoted by *S<sup>x</sup>* <sup>−</sup>; i.e., *S<sup>x</sup>* = *S* ∩ *V*(*Hx*) and *S<sup>x</sup>* <sup>−</sup> = *S<sup>x</sup>* \ {*x*}. In some cases, we will need to define *S* ⊆ *V*(*G* ◦*<sup>v</sup> H*) from the sets *S<sup>x</sup>* ⊆ *V*(*Hx*) as *S* = ∪*x*∈*V*(*G*)*Sx*.

Since *<sup>V</sup>*(*<sup>G</sup>* ◦*<sup>v</sup> <sup>H</sup>*) = ∪*x*∈*V*(*G*)*V*(*Hx*), we have that for every set *<sup>S</sup>* ⊆ *<sup>V</sup>*(*<sup>G</sup>* ◦*<sup>v</sup> <sup>H</sup>*),

$$|\mathcal{S}| = \sum\_{\mathbf{x} \in V(\mathcal{G})} |\mathcal{S}\_{\mathbf{x}}| = \sum\_{\mathbf{x} \in V(\mathcal{G})} |\mathcal{S}\_{\mathbf{x}}^{-}| + |\mathcal{S} \cap V(\mathcal{G})|.\tag{1}$$

A basic problem in the study of product graphs consists of finding closed formulas or sharp bounds for specific invariants of the product of two graphs and expressing these in terms of parameters of the graphs involved in the product. In this sense, for recent results on rooted product graphs, we cite the following works [5–19]. As we can expect, the products of graphs are not alien to applications in other fields. In particular, in [5] the authors show that several important classes of chemical graphs can be expressed as rooted product graphs, and as described in [20], there exist a number of molecular graphs of high-tech interest that can be generated using the rooted product of graphs.

#### **1. Closed Formulas for the Total Domination Number**

The following three lemmas will be the main tools to deduce our results.

**Lemma 1.** *Given a graph H with no isolated vertex and any v* ∈ *V*(*H*) \ S(*H*)*, the following statements hold.*

(i) *γt*(*H* − {*v*}) ≥ *γt*(*H*) − 1*.*

(ii) *If γt*(*H* − {*v*}) = *γt*(*H*) − 1*, then the following statements hold.*

	- (a) *N*(*v*) ∩ *S* = ∅ *for every γt*(*H* − {*v*})*-set S.*
	- (b) *There exists a γt*(*H*)*-set not containing v.*

(iii) *If γt*(*H* − {*v*}) > *γt*(*H*)*, then v belongs to every γt*(*H*)*-set.*

**Proof.** Let *v* ∈ *V*(*H*) \ S(*H*) and *S* a *γt*(*H* − {*v*})-set. For every *u* ∈ *N*(*v*) we have that *S* ∪ {*u*} is a TDS of *H*, which implies that *γt*(*H*) ≤ |*S* ∪ {*u*}| ≤ *γt*(*H* − {*v*}) + 1. Therefore, (i) follows.

Now, in order to prove (ii), we assume that |*S*| = *γt*(*H*) − 1. If there exists a vertex *y* ∈ *N*(*v*) ∩ *S*, then *S* is also a TDS of *H*, which is a contradiction. Therefore, *N*(*v*) ∩ *S* = ∅ and so (a) follows. In addition, for any *y* ∈ *N*(*v*), the set *S* ∪ {*y*} is a *γt*(*H*)-set not containing *v*. Therefore, (b) also follows.

Finally, we proceed to prove (iii). If there exists a *γt*(*H*)-set *D* such that *v* ∈/ *D*, then *D* is also a TDS of *H* − {*v*}, and so *γt*(*H* − {*v*}) ≤ |*D*| = *γt*(*H*). Therefore, we conclude that if *γt*(*H* − {*v*}) > *γt*(*H*), then *v* ∈ *D* for every *γt*(*H*)-set *D*, which completes the proof.

**Lemma 2.** *Let H be a graph and v* ∈ *V*(*H*)*. If v is not a universal vertex and H* − *N*[*v*] *does not have isolated vertices, then*

$$
\gamma\_t(H - N[v]) \ge \gamma\_t(H) - 2.
$$

*Furthermore, if γt*(*H* − {*v*}) = *γt*(*H*) − 1*, then*

$$
\gamma\_t(H) - 2 \le \gamma\_t(H - N[v]) \le \gamma\_t(H) - 1.
$$

**Proof.** Assume that *v* is not a universal vertex and *H* − *N*[*v*] does not have isolated vertices. Let *S* be a *γt*(*H* − *N*[*v*])-set and *u* ∈ *N*(*v*). Since *S* ∪ {*u*, *v*} is a TDS of *H*, we have that *γt*(*H*) ≤ |*S* ∪ {*u*, *v*}| = *γt*(*H* − *N*[*v*]) + 2, as required.

Now, assume *γt*(*H* − {*v*}) = *γt*(*H*) − 1. In this case, by Lemma 1 (ii) we have that *N*(*v*) ∩ *D* = ∅ for every *γt*(*H* − {*v*})-set *D*, which implies that *D* is a TDS of *H* − *N*[*v*], and so *γt*(*H* − *N*[*v*]) ≤ |*D*| = *γt*(*H* − {*v*}) = *γt*(*H*) − 1. Therefore, the result follows.

**Lemma 3.** *Given a γt*(*G* ◦*<sup>v</sup> H*)*-set S and a vertex x* ∈ *V*(*G*)*, the following statements hold.*

(i) |*Sx*| ≥ *γt*(*H*) − 1*.*

(ii) *If* |*Sx*| = *γt*(*H*) − 1*, then N*(*x*) ∩ *S<sup>x</sup>* = ∅*.*

**Proof.** Let *x* ∈ *V*(*G*). Notice that every vertex in *V*(*Hx*) \ {*x*} is adjacent to some vertex in *Sx*. For any *y* ∈ *N*(*x*) ∩ *V*(*Hx*), the set *S<sup>x</sup>* ∪ {*y*} is a TDS of *Hx*, and so *γt*(*H*) = *γt*(*Hx*) ≤ |*S<sup>x</sup>* ∪ {*y*}| = |*Sx*| + 1. Therefore, (i) follows.

Finally, assume that |*Sx*| = *γt*(*H*) − 1. If there exists a vertex *y* ∈ *N*(*x*) ∩ *Sx*, then *S<sup>x</sup>* is a TDS of *Hx*, which is a contradiction. Therefore, *N*(*x*) ∩ *S<sup>x</sup>* = ∅, and so (ii) follows.

Given a *γt*(*G* ◦*<sup>v</sup> H*)-set *S*, we define the following subsets of *V*(*G*) associated with *S*.

$$\mathcal{A}\_{\mathcal{S}} = \{ \mathbf{x} \in V(\mathcal{G}) : |\mathcal{S}\_{\mathbf{x}}| \ge \gamma\_t(H) \} \text{ and } \mathcal{B}\_{\mathcal{S}} = \{ \mathbf{x} \in V(\mathcal{G}) : |\mathcal{S}\_{\mathbf{x}}| = \gamma\_t(H) - 1 \}.$$

These sets will play an important role in the results that follow. By Lemma 3, *V*(*G*) = A*<sup>S</sup>* ∪ B*S*. In particular, if A*<sup>S</sup>* = ∅, then *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1), and as we will show in the proof of Theorem 2, if B*<sup>S</sup>* = ∅, then *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*). As we can expect, these are the extreme values of *γt*(*G* ◦*<sup>v</sup> H*).

**Theorem 1.** *For any graphs G and H with no isolated vertex and any v* ∈ *V*(*H*)*,*

$$\mathfrak{n}(\mathbb{G})(\gamma\_t(H) - 1) \le \gamma\_t(\mathbb{G} \circ\_v H) \le \mathfrak{n}(\mathbb{G})\gamma\_t(H).$$

*Furthermore, if γt*(*H* − {*v*}) = *γt*(*H*) − 1*, then*

$$
\gamma\_t(G \circ\_\upsilon H) \le \gamma\_t(G) + \mathfrak{n}(G)(\gamma\_t(H) - 1).
$$

**Proof.** The lower bound follows from Lemma 3, as for any *γt*(*G* ◦*<sup>v</sup> H*)-set *S*,

$$\gamma\_t(G\circ\_v H) = |\mathbb{S}| = \sum\_{\mathbf{x}\in V(G)} |\mathbb{S}\_{\mathbf{x}}| \ge \mathbf{n}(G)(\gamma\_t(H) - 1).$$

Now, we proceed to prove the upper bound. Let *D* ⊆ *V*(*G* ◦*<sup>v</sup> H*) such that *D<sup>x</sup>* is a *γt*(*Hx*)-set for every *x* ∈ *V*(*G*). It is readily seen that *D* is a TDS of *G* ◦*<sup>v</sup> H*. Hence,

$$\gamma\_t(\mathcal{G}\circ\_\upsilon H) \le |D| = \sum\_{\mathbf{x}\in V(\mathcal{G})} |D\_\mathbf{x}| = \sum\_{\mathbf{x}\in V(\mathcal{G})} \gamma\_t(H\_\mathbf{x}) = \mathbf{n}(\mathcal{G})\gamma\_t(H).$$

From now on, assume *γt*(*H* − {*v*}) = *γt*(*H*) − 1. Notice that, by assumption, *H* − {*v*} does not have isolated vertices.

Let *W* ⊆ *V*(*G* ◦*<sup>v</sup> H*) such that *W<sup>x</sup>* <sup>−</sup> = *W<sup>x</sup>* \ {*x*} is a *γt*(*H<sup>x</sup>* − {*x*})-set for every *x* ∈ *V*(*G*) and *W* ∩ *V*(*G*) is a *γt*(*G*)-set. Clearly, *W* is a TDS of *G* ◦*<sup>v</sup> H*, which implies that

$$\gamma\_t(\mathcal{G}\circ\_{\upsilon}H) \le |\mathcal{W}\cap V(\mathcal{G})| + \sum\_{\mathbf{x}\in V(\mathcal{G})} |\mathcal{W}\_{\mathbf{x}}^{-}| \\ = \gamma\_t(\mathcal{G}) + \sum\_{\mathbf{x}\in V(\mathcal{G})} \gamma\_t(H\_{\mathbf{x}} - \{\mathbf{x}\}) \\ = \gamma\_t(\mathcal{G}) + \mathfrak{n}(\mathcal{G})(\gamma\_t(H) - 1).$$

Therefore, the result follows.

The following lemma is another important tool for determining all possible values of *γt*(*G* ◦*<sup>v</sup> H*).

**Lemma 4.** *Given a γt*(*G* ◦*<sup>v</sup> H*)*-set S with* B*<sup>S</sup>* ≠ ∅*, the following statements hold.*

(i) *If* B*<sup>S</sup>* ∩ *S* ≠ ∅*, then γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1)*.*

(ii) *If* B*<sup>S</sup>* ∩ *S* = ∅*, then γt*(*H* − {*v*}) = *γt*(*H*) − 1 *and*

$$
\gamma(G) + \mathfrak{n}(G)(\gamma\_t(H) - 1) \le \gamma\_t(G \circ\_v H) \le \gamma\_t(G) + \mathfrak{n}(G)(\gamma\_t(H) - 1).
$$

**Proof.** First, we proceed to prove (i). Given a fixed *x*′ ∈ B*<sup>S</sup>* ∩ *S*, let *D* ⊆ *V*(*G* ◦*<sup>v</sup> H*) such that for every *x* ∈ *V*(*G*) the set *D<sup>x</sup>* is induced by *S<sup>x</sup>*′. Obviously, *D* is a TDS of *G* ◦*<sup>v</sup> H*. Hence, *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*D*| = ∑*x*∈*V*(*G*) |*Dx*| = n(*G*)|*S<sup>x</sup>*′| = n(*G*)(*γt*(*H*) − 1). Therefore, Theorem 1 leads to *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1).

In order to prove (ii), assume that B*<sup>S</sup>* ∩ *S* = ∅, and let *x* ∈ B*S*. By Lemma 3 we have that *N*[*x*] ∩ *S<sup>x</sup>* = ∅. So, *x* ∉ S(*Hx*) and *S<sup>x</sup>* is a TDS of *H<sup>x</sup>* − {*x*}. Hence, *γt*(*H* − {*v*}) = *γt*(*H<sup>x</sup>* − {*x*}) ≤ |*Sx*| = *γt*(*H*) − 1, and so Lemma 1 leads to *γt*(*H* − {*v*}) = *γt*(*H*) − 1. Therefore, by Theorem 1 we have that *γt*(*G* ◦*<sup>v</sup> H*) ≤ *γt*(*G*) + n(*G*)(*γt*(*H*) − 1).

Moreover, since *N*[*x*] ∩ *S<sup>x</sup>* = ∅ for every *x* ∈ B*S*, we have that A*<sup>S</sup>* is a dominating set of *G*. Hence,

$$\begin{aligned} \gamma\_t(\mathcal{G}\circ\_v H) &= \sum\_{\mathbf{x}\in\mathcal{A}\_S} |\mathcal{S}\_\mathbf{x}| + \sum\_{\mathbf{x}\in\mathcal{B}\_S} |\mathcal{S}\_\mathbf{x}| \\ &\ge \quad |\mathcal{A}\_S|\gamma\_t(H) + |\mathcal{B}\_S|(\gamma\_t(H) - 1) \\ &\ge \quad |\mathcal{A}\_S| + \mathfrak{n}(G)(\gamma\_t(H) - 1) \\ &\ge \quad \gamma(\mathcal{G}) + \mathfrak{n}(G)(\gamma\_t(H) - 1). \end{aligned}$$

Therefore, the result follows.

Next we give one of the main results of this section, which states the four possible values of *γt*(*G* ◦*<sup>v</sup> H*).

**Theorem 2.** *Let G and H be two graphs with no isolated vertex. For any v* ∈ *V*(*H*)*,*

$$\gamma\_t(\mathcal{G}\circ\_{\boldsymbol{\nu}}H) \in \{\mathbf{n}(\mathcal{G})(\gamma\_t(H)-1), \gamma(\mathcal{G})+\mathbf{n}(\mathcal{G})(\gamma\_t(H)-1), \gamma\_t(\mathcal{G})+\mathbf{n}(\mathcal{G})(\gamma\_t(H)-1), \mathbf{n}(\mathcal{G})\gamma\_t(H)\}.$$

**Proof.** Let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set and consider the subsets A*S*, B*<sup>S</sup>* ⊆ *V*(*G*) associated with *S*. We distinguish the following cases.

Case 1. B*<sup>S</sup>* = ∅. In this case, for any *x* ∈ *V*(*G*) we have that |*Sx*| ≥ *γt*(*H*), and as a consequence, *<sup>γ</sup>t*(*<sup>G</sup>* ◦*<sup>v</sup> <sup>H</sup>*) = <sup>∑</sup>*x*∈*V*(*G*) |*Sx*| ≥ n(*G*)*γt*(*H*). Thus, Theorem 1 leads to the equality *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*).

Case 2. B*<sup>S</sup>* ≠ ∅. If B*<sup>S</sup>* ∩ *S* ≠ ∅, then from Lemma 4 (i) we have that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1). From now on we assume that B*<sup>S</sup>* ∩ *S* = ∅. Hence, Lemma 4 (ii) leads to

$$
\gamma(G) + \mathfrak{n}(G)(\gamma\_t(H) - 1) \le \gamma\_t(G \circ\_\upsilon H) \le \gamma\_t(G) + \mathfrak{n}(G)(\gamma\_t(H) - 1).
$$

We only need to prove that *γt*(*G* ◦*<sup>v</sup> H*) can only take the extreme values. To this end, we introduce the following notation. Let A′*<sup>S</sup>* = {*x* ∈ A*<sup>S</sup>* : |*Sx*| = *γt*(*H*)} and A″*<sup>S</sup>* = A*<sup>S</sup>* \ A′*S*.

Subcase 2.1. There exists *x*′ ∈ A′*<sup>S</sup>* such that *S<sup>x</sup>*′ is a *γt*(*H<sup>x</sup>*′)-set containing *x*′. From a fixed vertex *y* ∈ B*<sup>S</sup>* and any *γ*(*G*)-set *D*, we can construct a set *W* ⊆ *V*(*G* ◦*<sup>v</sup> H*) as follows. If *x* ∈ *D*, then *W<sup>x</sup>* is induced by *S<sup>x</sup>*′, while if *x* ∈ *V*(*G*) \ *D*, then *W<sup>x</sup>* is induced by *Sy*. Notice that *W* is a TDS of *G* ◦*<sup>v</sup> H*, which implies that *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*W*| = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1). Therefore, *γt*(*G* ◦*<sup>v</sup> H*) = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1).

Subcase 2.2. A′*<sup>S</sup>* = ∅, or for any *x* ∈ A′*S*, either *S<sup>x</sup>* is not a *γt*(*Hx*)-set or *x* ∉ *Sx*. If A′*<sup>S</sup>* ≠ ∅, then every vertex *x* ∈ A′*<sup>S</sup>* satisfies one of the following conditions.

	- (a) *S<sup>x</sup>* is a *γt*(*Hx*)-set and *x* ∉ *Sx*.
	- (b) *S<sup>x</sup>* is not a TDS of *H<sup>x</sup>* and *x* ∈ *Sx*.

Notice that we do not consider the case where *S<sup>x</sup>* is not a TDS of *H<sup>x</sup>* and *x* ∉ *Sx*, as in this case we can replace *S* with the *γt*(*G* ◦*<sup>v</sup> H*)-set (*S* \ *Sx*) ∪ *S*′*<sup>x</sup>* for some *γt*(*Hx*)-set *S*′*x*. In such a case, if *x* ∈ *S*′*x*, then we proceed as in Subcase 2.1, while if *x* ∉ *S*′*x*, then *x* satisfies (a).

Let us construct a TDS *X* of *G* as follows.


We proceed to show that *X* is a TDS of *G*. If *x* ∈ *V*(*G*) \ *X*, then either *x* ∈ B*<sup>S</sup>* or *x* ∈ A′*<sup>S</sup>* \ *S*. If *x* ∈ B*S*, then *N*(*x*) ∩ *S* ∩ A*<sup>S</sup>* ≠ ∅, which implies that *N*(*x*) ∩ *X* ≠ ∅. Obviously, if *x* ∈ A′*<sup>S</sup>* \ *S*, then *N*(*x*) ∩ *X* ≠ ∅, by definition of *X*. Now, let *x* ∈ *X*. If *x* ∈ A″*<sup>S</sup>* ∪ (A′*<sup>S</sup>* \ *S*), then *N*(*x*) ∩ *X* ≠ ∅ by definition. If *x* ∈ A′*<sup>S</sup>* ∩ *S*, then *x* satisfies condition (b). This implies that *N*(*x*) ∩ *S<sup>x</sup>* = ∅. Hence, there exists a vertex *y* ∈ *N*(*x*) ∩ *V*(*G*) ∩ *S* ⊆ *X*, as desired.

Therefore, *X* is a TDS of *G*, which implies that *γt*(*G*) ≤ |*X*| ≤ 2|A″*S*| + |A′*S*|. Thus,

$$\begin{split} \gamma\_{t}(\operatorname{G}\circ\_{\mathfrak{v}}H) &\geq \sum\_{\mathbf{x}\in\mathcal{A}\_{\operatorname{S}}''}|\operatorname{S}\_{\mathbf{x}}| + \sum\_{\mathbf{x}\in\mathcal{A}\_{\operatorname{S}}'}|\operatorname{S}\_{\mathbf{x}}| + \sum\_{\mathbf{x}\in\mathcal{B}\_{\operatorname{S}}}|\operatorname{S}\_{\mathbf{x}}| \\ &\geq |\mathcal{A}\_{\operatorname{S}}''|(\gamma\_{t}(H) + 1) + |\mathcal{A}\_{\operatorname{S}}'|\gamma\_{t}(H) + |\mathcal{B}\_{\operatorname{S}}|(\gamma\_{t}(H) - 1) \\ &\geq (2|\mathcal{A}\_{\operatorname{S}}''| + |\mathcal{A}\_{\operatorname{S}}'|) + \mathfrak{n}(G)(\gamma\_{t}(H) - 1) \\ &\geq \gamma\_{t}(G) + \mathfrak{n}(G)(\gamma\_{t}(H) - 1), \end{split}$$

which completes the proof.

Later on, we will characterize the graphs that reach each of the previous expressions. However, we have to admit that when applying some of these characterizations, we will need to calculate the total domination number of *H* − {*v*} or *H* − *N*[*v*], which may not be easy. Before giving the above-mentioned characterizations, we shall show a simple example in which we can observe that these expressions of *γt*(*G* ◦*<sup>v</sup> H*) are realizable.

**Example 1.** *Let G be a graph with no isolated vertex. If H is one of the graphs shown in Figure 2, then the resulting values of γt*(*G* ◦*<sup>v</sup> H*) *for some specific roots are described below.*


*For these cases, it is not difficult to construct a γt*(*G* ◦*<sup>v</sup> H*)*-set. For instance, a γt*(*G* ◦*<sup>v</sup> H*2)*-set S can be formed as follows. Given a fixed γ*(*G*)*-set X, we take S in such a way that the set S<sup>x</sup> is induced by* {*a*, *b*, *v*′, *v*} *for every x* ∈ *X, and induced by* {*a*, *b*, *c*} *for every x* ∈ *V*(*G*) \ *X.*

**Figure 2.** The set of black-coloured vertices forms a *γt*(*H<sup>i</sup>* )-set for *i* ∈ {1, 2}. The set {*v*′, *v*″} forms a *γt*(*H*<sup>1</sup> − {*v*})-set, while {*a*, *b*, *c*} forms a *γt*(*H*<sup>2</sup> − {*v*})-set.

As we have observed in Lemma 2, if *v* ∈ *V*(*H*) is not a universal vertex and *H* − *N*[*v*] does not have isolated vertices, then *γt*(*H* − *N*[*v*]) ≥ *γt*(*H*) − 2. Next we show that the extreme case *γt*(*H* − *N*[*v*]) = *γt*(*H*) − 2 characterizes the graphs with *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1).

**Theorem 3.** *Given two graphs G and H with no isolated vertex and v* ∈ *V*(*H*)*, the following statements are equivalent.*

(i) *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1).

(ii) *v is a universal vertex of H or γt*(*H* − *N*[*v*]) = *γt*(*H*) − 2.

**Proof.** First, assume that (i) holds. Let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. If *v* is a universal vertex of *H*, then we are done. Assume that *v* ∈ *V*(*H*) is not a universal vertex. In this case, Lemma 3 leads to B*<sup>S</sup>* = *V*(*G*) and *N*(*x*) ∩ *S<sup>x</sup>* = ∅ for every *x* ∈ B*S*. Thus, B*<sup>S</sup>* ∩ *S* is a dominating set of *G* and for any *x* ∈ B*<sup>S</sup>* ∩ *S* we have that *H<sup>x</sup>* − *N*[*x*] does not have isolated vertices and *S<sup>x</sup>* \ {*x*} is a TDS of *H<sup>x</sup>* − *N*[*x*], which implies that *γt*(*H* − *N*[*v*]) = *γt*(*H<sup>x</sup>* − *N*[*x*]) ≤ |*S<sup>x</sup>* \ {*x*}| = *γt*(*H*) − 2. Hence, Lemma 2 leads to *γt*(*H* − *N*[*v*]) = *γt*(*H*) − 2. Therefore, (ii) follows.

Conversely, assume that (ii) holds. If *v* is a universal vertex of *H*, then *V*(*G*) is a TDS of *G* ◦*<sup>v</sup> H*, which implies that *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*V*(*G*)| = n(*G*) = n(*G*)(*γt*(*H*) − 1). Thus, by Theorem 1 we conclude that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1).

From now on, we assume that *v* is not a universal vertex. For any *x* ∈ *V*(*G*), let *D*′*<sup>x</sup>* be a *γt*(*H<sup>x</sup>* − *N*[*x*])-set and *D<sup>x</sup>* = *D*′*<sup>x</sup>* ∪ {*x*}. Observe that *D* = ∪*x*∈*V*(*G*)*D<sup>x</sup>* is a TDS of *G* ◦*<sup>v</sup> H*, which implies that *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*D*| = n(*G*)(*γt*(*H* − *N*[*v*]) + 1) = n(*G*)(*γt*(*H*) − 1). By Theorem 1 we conclude that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1), which completes the proof.

**Lemma 5.** *Let G and H be two graphs with no isolated vertex and v* ∈ *V*(*H*) \ S(*H*)*. If γt*(*H* − {*v*}) ≥ *γt*(*H*)*, then*

$$\gamma\_t(G\circ\_v H) \in \{\mathfrak{n}(G)\gamma\_t(H), \mathfrak{n}(G)(\gamma\_t(H)-1)\}.$$

**Proof.** By Theorem 1 we have that *γt*(*G* ◦*<sup>v</sup> H*) ≤ n(*G*)*γt*(*H*). Let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. If |*S*| = n(*G*)*γt*(*H*), then we are done. Suppose that |*S*| < n(*G*)*γt*(*H*). Hence, there exists *x* ∈ *V*(*G*) such that |*Sx*| < *γt*(*H*), which implies that *x* ∈ B*<sup>S</sup>* by Lemma 3. Since *γt*(*H* − {*v*}) ≥ *γt*(*H*), Lemma 4 (ii) leads to *x* ∈ *S*, and by Lemma 4 (i) we deduce that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1).

**Lemma 6.** *Let G and H be two graphs with no isolated vertex and v* ∈ *V*(*H*)*. If v belongs to every γt*(*H*)*-set, then*

$$\gamma\_t(G \circ\_v H) \in \{ \mathfrak{n}(G)\gamma\_t(H), \mathfrak{n}(G)(\gamma\_t(H) - 1) \}.$$

**Proof.** We first consider the case where *v* ∈ *V*(*H*) \ S(*H*). By Lemma 1 we deduce that *γt*(*H* − {*v*}) ≥ *γt*(*H*), and so Lemma 5 leads to the result. Now, assume that *v* ∈ S(*H*) and let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. If *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*), then we are done. Thus, we assume that *γt*(*G* ◦*<sup>v</sup> H*) < n(*G*)*γt*(*H*). In such a case, there exists *x* ∈ B*S*, and since *x* ∈ S(*Hx*), it follows that *x* ∈ S(*G* ◦*<sup>v</sup> H*). Therefore, *x* ∈ *S*, and by Lemma 4 (i) we deduce that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1), which completes the proof.

We are now ready to characterize the graphs with *γt*(*G* ◦*<sup>v</sup> H*) = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1).

**Theorem 4.** *Let G and H be two graphs with no isolated vertex and v* ∈ *V*(*H*)*. The following statements are equivalent.*

(i) *γt*(*G* ◦*<sup>v</sup> H*) = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1).

(ii) *γt*(*H* − *N*[*v*]) = *γt*(*H* − {*v*}) = *γt*(*H*) − 1*, and in addition, γt*(*G*) = *γ*(*G*) *or there exists a γt*(*H*)*-set D such that v* ∈ *D.*

**Proof.** First, assume that (i) holds. Since 1 ≤ *γ*(*G*) < n(*G*), by Lemma 6, *v* ∉ S(*H*), so that from Lemma 5 we deduce that *γt*(*H* − {*v*}) ≤ *γt*(*H*) − 1 and Lemma 1 leads to *γt*(*H* − {*v*}) = *γt*(*H*) − 1. Hence, by Lemma 2 it follows that *γt*(*H* − *N*[*v*]) ∈ {*γt*(*H*) − 2, *γt*(*H*) − 1} and by Theorem 3 we obtain that *γt*(*H* − *N*[*v*]) = *γt*(*H*) − 1.

Now, let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. Since 1 ≤ *γ*(*G*) < n(*G*), Lemma 3 leads to A*<sup>S</sup>* ≠ ∅ and B*<sup>S</sup>* ≠ ∅. Additionally, by Lemma 4 we deduce that B*<sup>S</sup>* ∩ *S* = ∅, and by Lemma 3 we have that *N*(*x*) ∩ *S<sup>x</sup>* = ∅ for every *x* ∈ B*S*. Hence, A*<sup>S</sup>* is a dominating set of *G* and A*<sup>S</sup>* ∩ *S* ≠ ∅. Thus, *γt*(*G* ◦*<sup>v</sup> H*) ≥ |A*S*| + n(*G*)(*γt*(*H*) − 1) ≥ *γ*(*G*) + n(*G*)(*γt*(*H*) − 1) = *γt*(*G* ◦*<sup>v</sup> H*), which implies that A*<sup>S</sup>* is a *γ*(*G*)-set and for every *x* ∈ A*<sup>S</sup>* ∩ *S* we have that |*Sx*| = *γt*(*H*). Therefore, there exists *x* ∈ A*<sup>S</sup>* ∩ *S* such that *S<sup>x</sup>* is a *γt*(*Hx*)-set or A*<sup>S</sup>* is a *γt*(*G*)-set, which implies that (ii) holds.

Conversely, assume that (ii) holds. As above, let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. Since *γt*(*H* − {*v*}) = *γt*(*H*) − 1, by Theorem 1, *γt*(*G* ◦*<sup>v</sup> H*) ≤ *γt*(*G*) + n(*G*)(*γt*(*H*) − 1).

Suppose that B*<sup>S</sup>* = ∅. In such a case, *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*), which implies that *γ*(*G*) < *γt*(*G*) = n(*G*), and so *G* ≅ ∪*K*2. Let *A* ∪ *B* = *V*(*G*) be the bipartition of the vertex set of *G*, i.e., every edge has one endpoint in *A* and the other one in *B*. Thus, for every *x* ∈ *V*(*G*) we define a subset *Y<sup>x</sup>* ⊆ *V*(*Hx*) as follows. If *x* ∈ *A*, then *Y<sup>x</sup>* is a *γt*(*Hx*)-set which contains *x*, while if *x* ∈ *B*, then *Y<sup>x</sup>* is a *γt*(*H<sup>x</sup>* − {*x*})-set. Hence, *Y* = ∪*x*∈*V*(*G*)*Y<sup>x</sup>* is a TDS of *G* ◦*<sup>v</sup> H* and so *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*Y*| = n(*G*)*γt*(*H*) − n(*G*)/2 < n(*G*)*γt*(*H*), which is a contradiction. From now on we assume that B*<sup>S</sup>* ≠ ∅.

If there exists a vertex *x* ∈ B*<sup>S</sup>* ∩ *S*, then by Lemma 3 we have that *N*(*x*) ∩ *S<sup>x</sup>* = ∅, which implies that *S<sup>x</sup>* \ {*x*} is a TDS of *H<sup>x</sup>* − *N*[*x*]. Hence, *γt*(*H* − *N*[*v*]) = *γt*(*H<sup>x</sup>* − *N*[*x*]) ≤ |*S<sup>x</sup>* \ {*x*}| = *γt*(*H*) − 2, which is a contradiction with the assumption *γt*(*H* − *N*[*v*]) = *γt*(*H*) − 1. Therefore, B*<sup>S</sup>* ∩ *S* = ∅, and by Lemma 4 we deduce that *γt*(*G* ◦*<sup>v</sup> H*) ≥ *γ*(*G*) + n(*G*)(*γt*(*H*) − 1).

It remains to prove that *γt*(*G* ◦*<sup>v</sup> H*) ≤ *γ*(*G*) + n(*G*)(*γt*(*H*) − 1). If *γ*(*G*) = *γt*(*G*), then we are done. Assume *γ*(*G*) < *γt*(*G*). Now we take a *γ*(*G*)-set *X* and for every *x* ∈ *V*(*G*) we define a set *Z<sup>x</sup>* ⊆ *V*(*Hx*) as follows. If *x* ∈ *X*, then *Z<sup>x</sup>* is a *γt*(*Hx*)-set such that *x* ∈ *Zx*, while if *x* ∈ *V*(*G*) \ *X*, then *Z<sup>x</sup>* is a *γt*(*H<sup>x</sup>* − {*x*})-set. Notice that *Z* = ∪*x*∈*V*(*G*)*Z<sup>x</sup>* is a TDS of *G* ◦*<sup>v</sup> H*. Therefore, *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*Z*| = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1), as required.

Next we proceed to characterize the graphs with *γt*(*G* ◦*<sup>v</sup> H*) = *γt*(*G*) + n(*G*)(*γt*(*H*) − 1). Notice that the case *G* ≅ ∪*K*2 is excluded. In such a case, *γt*(*G*) = n(*G*), and so *γt*(*G*) + n(*G*)(*γt*(*H*) − 1) = n(*G*)*γt*(*H*), which implies that the characterization of this particular case can be derived by elimination from Theorems 3 and 4. Analogously, the case *γ*(*G*) = *γt*(*G*) is excluded, as it was discussed in Theorem 4.

**Theorem 5.** *Let G* ≇ ∪*K*2 *and H be two graphs with no isolated vertex such that γ*(*G*) < *γt*(*G*)*, and let v* ∈ *V*(*H*)*. The following statements are equivalent.*

(i) *γt*(*G* ◦*<sup>v</sup> H*) = *γt*(*G*) + n(*G*)(*γt*(*H*) − 1).

(ii) *γt*(*H* − {*v*}) = *γt*(*H*) − 1 *and v* ∉ *D for every γt*(*H*)*-set D.*

**Proof.** First, assume that (i) holds. Since *G* ≇ ∪*K*2, we have that *γt*(*G*) < n(*G*). Thus, by Lemma 6, *v* ∉ S(*H*), and then by Lemma 5 we deduce that *γt*(*H* − {*v*}) ≤ *γt*(*H*) − 1 and Lemma 1 leads to *γt*(*H* − {*v*}) = *γt*(*H*) − 1.

Suppose that there exists a *γt*(*H*)-set containing *v*. Let *X* be a *γ*(*G*)-set. For every *x* ∈ *V*(*G*) we define a set *Z<sup>x</sup>* ⊆ *V*(*Hx*) as follows. If *x* ∈ *X*, then *Z<sup>x</sup>* is a *γt*(*Hx*)-set such that *x* ∈ *Zx*, while if *x* ∈ *V*(*G*) \ *X*, then *Z<sup>x</sup>* is a *γt*(*H<sup>x</sup>* − {*x*})-set. Notice that *Z* = ∪*x*∈*V*(*G*)*Z<sup>x</sup>* is a TDS of *G* ◦*<sup>v</sup> H*. Therefore, *γt*(*G* ◦*<sup>v</sup> H*) ≤ |*Z*| = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1), which is a contradiction, as *γt*(*G*) > *γ*(*G*). Therefore, *v* ∉ *D* for every *γt*(*H*)-set *D*, which implies that (ii) follows.

Conversely, assume that (ii) holds. Since *γt*(*H* − {*v*}) = *γt*(*H*) − 1, by Theorem 1 we have that *γt*(*G* ◦*<sup>v</sup> H*) ≤ *γt*(*G*) + n(*G*)(*γt*(*H*) − 1). Let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. If B*<sup>S</sup>* = ∅, then *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*), and so *γt*(*G*) = n(*G*), which is a contradiction, as *G* ≇ ∪*K*2. Hence, from now on we assume that B*<sup>S</sup>* ≠ ∅.

If there exists a vertex *x* ∈ B*<sup>S</sup>* ∩ *S*, then for any vertex *y* ∈ *N*(*v*) ∩ *V*(*Hx*), the set *S<sup>x</sup>* ∪ {*y*} is a *γt*(*Hx*)-set containing *x*, which contradicts (ii). Thus, B*<sup>S</sup>* ∩ *S* = ∅, and so by Lemma 3, A*<sup>S</sup>* is a dominating set of *G*. Moreover, by Lemma 4 and Theorem 2 we deduce that either *γt*(*G* ◦*<sup>v</sup> H*) = *γ*(*G*) + n(*G*)(*γt*(*H*) − 1) or *γt*(*G* ◦*<sup>v</sup> H*) = *γt*(*G*) + n(*G*)(*γt*(*H*) − 1). Now, let A*<sup>S</sup>* = *A*<sup>−</sup> ∪ *A*<sup>+</sup>, where *x* ∈ *A*<sup>−</sup> if *x* ∈ A*<sup>S</sup>* and *N*(*x*) ∩ A*<sup>S</sup>* = ∅, and *A*<sup>+</sup> = A*<sup>S</sup>* \ *A*<sup>−</sup>. Let *B* ⊆ B*<sup>S</sup>* be such that |*B*| ≤ |*A*<sup>−</sup>| and *N*(*x*) ∩ *B* ≠ ∅ for every *x* ∈ *A*<sup>−</sup>. Obviously, *B* ∪ *A*<sup>+</sup> is a total dominating set of *G*, and so *γt*(*G*) + n(*G*)(*γt*(*H*) − 1) ≤ |*B* ∪ *A*<sup>+</sup>| + n(*G*)(*γt*(*H*) − 1) ≤ |A*S*| + n(*G*)(*γt*(*H*) − 1) ≤ *γt*(*G* ◦*<sup>v</sup> H*). Therefore, the result follows.

From Theorem 2 we learned that there are four possible expressions for *γt*(*G* ◦*<sup>v</sup> H*). In the case of the first three expressions, the graphs (and the root) reaching the equality were characterized in Theorems 3–5. In the case of the expression *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*), the corresponding characterization can be derived by elimination from the previous results, although it must be recognized that the formulation of such a characterization is somewhat cumbersome. To conclude this section, we will just give a couple of examples where this expression is obtained.

The following result shows an example where *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*), which covers the cases in which *v* is a neighbor of a support vertex, excluding the case where *v* is the only leaf adjacent to its support.

**Proposition 1.** *Let G and H be two graphs with no isolated vertex and v* ∈ *V*(*H*)*. If there exists u* ∈ *N*(*v*) *such that N*(*u*) ∩ (L(*H*) \ {*v*}) ≠ ∅*, then*

$$
\gamma\_t(G \circ\_v H) = \mathfrak{n}(G)\gamma\_t(H).
$$

**Proof.** Assume first that *v* ∉ S(*H*). Let *D* be a *γt*(*H* − {*v*})-set. Since *u* ∈ S(*H* − {*v*}), we have that *u* ∈ *D*. Hence, *D* is a TDS of *H*, and so *γt*(*H* − {*v*}) = |*D*| ≥ *γt*(*H*). Therefore, Lemma 5 leads to *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*) or *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1). Now, suppose that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1). Let *S* be a *γt*(*G* ◦*<sup>v</sup> H*)-set. By Lemma 3, B*<sup>S</sup>* = *V*(*G*) and *N*(*x*) ∩ *S<sup>x</sup>* = ∅ for every *x* ∈ B*S*, which is a contradiction, as *N*(*x*) ∩ S(*Hx*) ≠ ∅ and S(*Hx*) ⊆ *Sx*. Therefore, *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*).

Now, if *v* ∈ S(*H*), then *u*, *v* ∈ S(*G* ◦*<sup>v</sup> H*). Hence, for every *γt*(*G* ◦*<sup>v</sup> H*)-set *S* and every vertex *x* ∈ *V*(*G*), we have that *S<sup>x</sup>* is a TDS of *Hx*. Thus, B*<sup>S</sup>* = ∅, which implies that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*), as required.
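For instance (an illustration added here, not taken from the original), root *H* = *P*3 with consecutive vertices *a*, *b*, *c* at the leaf *v* = *a*. The vertex *u* = *b* ∈ *N*(*a*) satisfies *N*(*u*) ∩ (L(*H*) \ {*a*}) = {*c*} ≠ ∅, so Proposition 1 yields

$$\gamma\_t(G \circ\_a P\_3) = 2\mathfrak{n}(G)$$

for every graph *G* with no isolated vertex, since *γt*(*P*3) = 2.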

We next consider another example where *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*).

**Proposition 2.** *Let G and H be two graphs with no isolated vertex and v* ∈ *V*(*H*) \ S(*H*)*. If γt*(*H* − {*v*}) ≥ *γt*(*H*) *and v does not belong to any γt*(*H*)*-set, then*

$$
\gamma\_t(G \circ\_v H) = \mathfrak{n}(G)\gamma\_t(H).
$$

**Proof.** If *γt*(*H* − {*v*}) ≥ *γt*(*H*), then by Lemma 5 we have that *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*) or *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1). Now, assume that *v* does not belong to any *γt*(*H*)-set. If *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)(*γt*(*H*) − 1), then B*<sup>S</sup>* = *V*(*G*) for any *γt*(*G* ◦*<sup>v</sup> H*)-set *S*. Hence, by Lemma 4 (ii) there exists *x* ∈ B*<sup>S</sup>* ∩ *S*, which is a contradiction, as for any *x*′ ∈ *N*(*x*) ∩ *V*(*Hx*) the set *S<sup>x</sup>* ∪ {*x*′} is a *γt*(*Hx*)-set containing *x*. Therefore, *γt*(*G* ◦*<sup>v</sup> H*) = n(*G*)*γt*(*H*).
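To illustrate Proposition 2, consider the following example (added here for illustration; it does not appear in the original). Take *H* = *P*4 with consecutive vertices *a*, *b*, *c*, *d* and root *v* = *a*. Then *v* ∈ *V*(*H*) \ S(*H*), since S(*P*4) = {*b*, *c*}, and

$$\gamma\_t(P\_4 - \{a\}) = \gamma\_t(P\_3) = 2 \ge \gamma\_t(P\_4) = 2.$$

Moreover, {*b*, *c*} is the only *γt*(*P*4)-set, as any TDS must contain *b* to dominate *a* and *c* to dominate *d*; hence *v* = *a* belongs to no *γt*(*P*4)-set. Proposition 2 then gives *γt*(*G* ◦*<sup>a</sup> P*4) = 2n(*G*) for every graph *G* with no isolated vertex.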

#### **2. An Observation on the Domination Number**

It was shown in [15] that there are two possibilities for the domination number of a rooted product graph. Since the graphs reaching these expressions have not been characterized, it is appropriate to derive a result in this direction. Specifically, we provide such a characterization in Theorem 7.

**Theorem 6.** [15] *For any nontrivial graphs G and H and any v* ∈ *V*(*H*)*,*

$$\gamma(G\circ\_v H)\in\{\mathfrak{n}(G)\gamma(H),\gamma(G)+\mathfrak{n}(G)(\gamma(H)-1)\}.$$

In order to derive our result, we need to introduce the following two lemmas.

**Lemma 7.** [21] *Let H be a graph. For any vertex v* ∈ *V*(*H*)*,*

$$
\gamma(H - \{v\}) \ge \gamma(H) - 1.
$$

**Lemma 8.** *For any γ*(*G* ◦*<sup>v</sup> H*)*-set D and any vertex x* ∈ *V*(*G*)*,*

$$|D\_x| \ge \gamma(H) - 1.$$

*Furthermore, if* |*Dx*| = *γ*(*H*) − 1*, then N*[*x*] ∩ *D<sup>x</sup>* = ∅*.*

**Proof.** Let *x* ∈ *V*(*G*). Notice that every vertex in *V*(*Hx*) \ {*x*} either belongs to *D<sup>x</sup>* or is adjacent to some vertex in *Dx*. Hence, *D<sup>x</sup>* ∪ {*x*} is a dominating set of *Hx*, and so *γ*(*H*) = *γ*(*Hx*) ≤ |*D<sup>x</sup>* ∪ {*x*}| ≤ |*Dx*| + 1, as required.

Now, assume that |*Dx*| = *γ*(*H*) − 1. If *N*[*x*] ∩ *D<sup>x</sup>* 6= ∅, then *D<sup>x</sup>* is a dominating set of *Hx*, which is a contradiction as |*Dx*| = *γ*(*Hx*) − 1. Therefore, the result follows.

**Theorem 7.** *For any pair of nontrivial graphs G and H, and any v* ∈ *V*(*H*)*,*

$$\gamma(G\circ\_v H) = \begin{cases} \gamma(G) + \mathfrak{n}(G)(\gamma(H) - 1) & \text{if } \gamma(H - \{v\}) = \gamma(H) - 1, \\\\ \mathfrak{n}(G)\gamma(H) & \text{otherwise.} \end{cases}$$

**Proof.** By Theorem 6 we only need to prove that *γ*(*G* ◦*<sup>v</sup> H*) = *γ*(*G*) + n(*G*)(*γ*(*H*) − 1) if and only if *γ*(*H* − {*v*}) = *γ*(*H*) − 1.

We first assume *γ*(*H* − {*v*}) = *γ*(*H*) − 1. Let *D* ⊆ *V*(*G* ◦*<sup>v</sup> H*) be such that *D*<sup>−</sup>*<sup>x</sup>* = *D<sup>x</sup>* \ {*x*} is a *γ*(*H<sup>x</sup>* − {*x*})-set for every *x* ∈ *V*(*G*), and *D* ∩ *V*(*G*) is a *γ*(*G*)-set. It is readily seen that *D* is a dominating set of *G* ◦*<sup>v</sup> H*, which implies that *γ*(*G* ◦*<sup>v</sup> H*) ≤ |*D*| = *γ*(*G*) + ∑*x*∈*V*(*G*) |*D*<sup>−</sup>*<sup>x</sup>*| = *γ*(*G*) + n(*G*)(*γ*(*H*) − 1), and by Theorem 6 we conclude that the equality holds.

Conversely, assume *γ*(*G* ◦*<sup>v</sup> H*) = *γ*(*G*) + n(*G*)(*γ*(*H*) − 1). Let *S* be a *γ*(*G* ◦*<sup>v</sup> H*)-set. Since |*S*| < n(*G*)*γ*(*H*), there exists *x* ∈ *V*(*G*) such that |*Sx*| < *γ*(*H*). Hence, by Lemma 8, |*Sx*| = *γ*(*H*) − 1 and *N*[*x*] ∩ *S<sup>x</sup>* = ∅. This implies that *S<sup>x</sup>* is a dominating set of *H<sup>x</sup>* − {*x*}, and so *γ*(*H* − {*v*}) = *γ*(*H<sup>x</sup>* − {*x*}) ≤ |*Sx*| = *γ*(*H*) − 1. By Lemma 7 we conclude that *γ*(*H* − {*v*}) = *γ*(*H*) − 1, which completes the proof.
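The dichotomy in Theorem 7 is easy to confirm by brute force on small graphs. The sketch below (an illustration added here, not part of the paper) computes the ordinary domination number and checks the first branch of the formula with *G* = *K*2 and *H* = *P*4 rooted at a leaf *a*, for which *γ*(*P*4 − {*a*}) = *γ*(*P*3) = 1 = *γ*(*P*4) − 1.

```python
from itertools import combinations

def gamma(adj):
    # brute-force domination number: smallest S such that every vertex
    # is in S or has a neighbor in S (exponential; tiny graphs only)
    for k in range(1, len(adj) + 1):
        for S in map(set, combinations(adj, k)):
            if all(w in S or adj[w] & S for w in adj):
                return k

def rooted_product(G, H, v):
    # G o_v H: one copy of H per vertex of G; the copies of the root v
    # are identified with the vertices of G and joined along its edges
    adj = {(x, w): {(x, u) for u in H[w]} for x in G for w in H}
    for x in G:
        adj[(x, v)] |= {(y, v) for y in G[x]}
    return adj

K2 = {0: {1}, 1: {0}}
P4 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}

# gamma(P4 - {a}) = gamma(P3) = 1 = gamma(P4) - 1, so Theorem 7 predicts
# gamma(K2 o_a P4) = gamma(K2) + n(K2)(gamma(P4) - 1) = 1 + 2*1 = 3
print(gamma(P4))                           # 2
print(gamma(rooted_product(K2, P4, 'a')))  # 3
```

Rooting instead at the center *b* of *P*3 (where *γ*(*H* − {*v*}) = 2 ≠ *γ*(*H*) − 1) falls into the second branch, n(*G*)*γ*(*H*), and can be checked the same way.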

**Author Contributions:** All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**



© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
