*Article* **Scheduling with Resource Allocation, Deteriorating Effect and Group Technology to Minimize Total Completion Time**

**Jia-Xuan Yan, Na Ren, Hong-Bin Bei, Han Bao and Ji-Bo Wang \***

School of Science, Shenyang Aerospace University, Shenyang 110136, China **\*** Correspondence: wangjibo@sau.edu.cn

**Abstract:** This paper studies a single-machine problem with resource allocation (*RA*) and deteriorating effect (*DE*). Under group technology (*GT*) and limited resource availability, our goal is to determine the schedules of groups and jobs within each group such that the total completion time is minimized. For three special cases, polynomial time algorithms are given. For a general case, a heuristic, a tabu search algorithm, and an exact (i.e., branch-and-bound) algorithm are proposed to solve this problem.

**Keywords:** resource allocation; group technology; deterioration effect; scheduling

**MSC:** 90B35

#### **1. Introduction**

With the development of the economy, research on group technology (denoted by *GT*) scheduling spans a variety of fields, especially supply chain management, information processing, and computer systems (see Ham et al. [1], Wang et al. [2]). Yang [3] and Bai et al. [4] investigated single-machine *GT* scheduling with learning and deterioration effects. Lu et al. [5] studied the single-machine problem with *GT* and time-dependent processing times (i.e., time-dependent scheduling), in which both the processing times of jobs and the setup times of groups are time-dependent. For makespan minimization subject to release dates, they presented a polynomial time algorithm. Wang et al. [6] examined the single-machine problem with *GT* and shortening job processing times. For makespan minimization with ready times, they demonstrated that some special cases can be solved optimally in polynomial time. Liu et al. [7] studied the single-machine problem with *GT* and deterioration effects (denoted by *DE*), i.e., the processing times of jobs are time-dependent while the setup times of groups are constant. For makespan minimization with ready times, they proposed a branch-and-bound algorithm. Zhu et al. [8] discussed the single-machine problem with *GT*, resource allocation (denoted by *RA*), and learning effects. For minimizing the weighted sum of makespan and total resource consumption, they proved that the problem remains polynomially solvable. In 2018, Zhang et al. [9] discussed the single-machine problem with *GT* and position-dependent processing times. In 2020, Liao et al. [10] considered a two-agent scheduling problem with *GT* and learning effects. In 2021, Lv et al. [11] addressed single-machine slack due date assignment problems with *GT*, *RA*, and learning effects. In 2021, Xu et al. [12] investigated the single-machine problem with *GT*, non-periodical maintenance, and *DE*. For makespan minimization, they proposed some heuristic algorithms.

Recently, Oron [13] and Li and Wang [14] considered a single-machine scheduling model combining *RA* and *DE*. Later, Wang et al. [15] discussed a scheduling model combining *GT*, *RA*, and *DE*; under the single-machine setting, the objective is to minimize the weighted sum of makespan and total resource consumption. Wang et al. [15] showed that some special cases remain polynomially solvable. In 2020, Liang et al. [16] considered the same model as Wang et al. [15] for the general case; they provided heuristic and branch-and-bound algorithms. In 2019, Wang and Liang [17] studied the single-machine problem with *GT*, *RA*, and *DE* concurrently. For makespan minimization under the constraint that total resource consumption cannot exceed an upper bound, they proved that some special cases remain polynomially solvable. For the general case, they provided heuristic and branch-and-bound algorithms.

**Citation:** Yan, J.-X.; Ren, N.; Bei, H.-B.; Bao, H.; Wang, J.-B. Scheduling with Resource Allocation, Deteriorating Effect and Group Technology to Minimize Total Completion Time. *Mathematics* **2022**, *10*, 2983. https://doi.org/10.3390/math10162983

Academic Editors: Zsolt Tibor Kosztyán and Zoltán Kovács

Received: 19 July 2022; Accepted: 16 August 2022; Published: 18 August 2022

This paper conducts a further study of the problem with *GT*, *RA*, and *DE*, but the objective is to minimize the total completion time under the constraint that total resource consumption cannot exceed an upper bound. For three special cases, polynomial time algorithms are given. For the general case, upper and lower bounds of the problem are derived and a branch-and-bound algorithm is proposed. In addition, a tabu search algorithm and a numerical simulation analysis are given.

The rest of this paper is organized as follows: Section 2 presents a formulation of the problem. Section 3 gives some basic properties. Section 4 studies some special cases. Section 5 considers the general case, and we propose some algorithms to solve this problem. Section 6 presents the numerical simulations. The conclusions are given in Section 7.

#### **2. Problem Statement**

The following notation (see Table 1) will be used throughout this paper. There are $n$ independent jobs. In order to exploit *GT* in production (see Ji et al. [18]), all jobs are classified in advance into $m$ ($m \geq 2$) groups $\Omega_1, \Omega_2, \dots, \Omega_m$ according to their processing similarities. All jobs in the same group must be processed in succession on a single machine. Assume that the single machine and all jobs are available at time zero. Let $J_{hj}$ denote job $j$ in group $\Omega_h$, and let $n_h$ be the number of jobs in group $\Omega_h$, i.e., $\sum_{h=1}^{m} n_h = n$. The actual processing time of $J_{hj}$ is:

$$p_{hj}^{Apt} = \left(\frac{\varsigma_{hj}}{r_{hj}}\right)^{\eta} + \theta t, \tag{1}$$

where $\varsigma_{hj}$ is the workload of $J_{hj}$, $r_{hj} \geq 0$ is the amount of resource allocated to $J_{hj}$, $\eta > 0$ is a constant, $\theta \geq 0$ is a common deterioration rate, and $t \geq 0$ is the starting time of $J_{hj}$. The actual setup time of $\Omega_h$ is:

$$s_h^{Apt} = \left(\frac{o_h}{r_h}\right)^{\eta} + \mu t, \tag{2}$$

where $o_h$ is the workload of $\Omega_h$, $r_h \geq 0$ is the amount of resource allocated to $\Omega_h$, and $\mu \geq 0$ is a common deterioration rate. Obviously, the parameters $n$, $m$, $\varsigma_{hj}$, $n_h$, $o_h$, $\eta$, $\theta$, and $\mu$ are given in advance, while the resource allocations $r_{hj}$ and $r_h$ are decision variables. Our goal is to find the optimal group schedule $\pi_\Omega^*$, the optimal job schedule $\pi_h^*$ ($h = 1, \dots, m$) within each group $\Omega_h$, and the optimal resource allocation $R^*$ (i.e., the $r_{hj}$ and $r_h$) such that the total completion time,

$$TCT(\pi_\Omega, \pi_h \,|\, h = 1, \dots, m, R) = \sum_{h=1}^{m}\sum_{j=1}^{n_h} C_{hj} \tag{3}$$

is minimized subject to $\sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{hj} \leq V$ and $\sum_{h=1}^{m} r_h \leq U$, where $V$ and $U$ are given constants (there is no constraint linking the $r_{hj}$ variables and the $r_h$ variables; $\sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{hj}$ and $\sum_{h=1}^{m} r_h$ are independent of each other). Using the three-field notation (see Gawiejnowicz [19]), the problem can be denoted by

$$1\,\left|\; p_{hj}^{Apt} = \left(\frac{\varsigma_{hj}}{r_{hj}}\right)^{\eta} + \theta t,\; s_h^{Apt} = \left(\frac{o_h}{r_h}\right)^{\eta} + \mu t,\; \sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{hj} \leq V,\; \sum_{h=1}^{m} r_h \leq U,\; GT \;\right|\, TCT,$$

where 1 denotes the single machine, the middle field gives the job and group characteristics, and $TCT$ is the objective function (this problem is abbreviated as $P_{TCT}$). Wang et al. [15] and Liang et al. [16] considered the problem

$$1\,\left|\; p_{hj}^{Apt} = \left(\frac{\varsigma_{hj}}{r_{hj}}\right)^{\eta} + \theta t,\; s_h^{Apt} = \left(\frac{o_h}{r_h}\right)^{\eta} + \mu t,\; GT \;\right|\, \alpha_1 C_{\max} + \alpha_2 \sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{hj} + \alpha_3 \sum_{h=1}^{m} r_h,$$

where $\alpha_l \geq 0$ ($l = 1, 2, 3$) is a given constant and $C_{\max} = \max\{C_{hj} \,|\, h = 1, \dots, m;\ j = 1, \dots, n_h\}$. Wang and Liang [17] studied the problem

$$1\,\left|\; p_{hj}^{Apt} = \left(\frac{\varsigma_{hj}}{r_{hj}}\right)^{\eta} + \theta t,\; s_h^{Apt} = \left(\frac{o_h}{r_h}\right)^{\eta} + \mu t,\; \sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{hj} \leq V,\; \sum_{h=1}^{m} r_h \leq U,\; GT \;\right|\, C_{\max}.$$

**Table 1.** Symbols (reconstructed from the definitions in the text).

| Symbol | Meaning |
|---|---|
| $n$, $m$ | number of jobs, number of groups |
| $\Omega_h$, $n_h$ | the $h$th group, number of jobs in $\Omega_h$ |
| $J_{hj}$ | job $j$ in group $\Omega_h$ |
| $\varsigma_{hj}$, $o_h$ | workload of $J_{hj}$, workload of the setup of $\Omega_h$ |
| $r_{hj}$, $r_h$ | resource allocated to $J_{hj}$, resource allocated to the setup of $\Omega_h$ |
| $\eta$ | positive constant |
| $\theta$, $\mu$ | common deterioration rates of jobs and of setups |
| $t$ | starting time |
| $V$, $U$ | resource availability bounds for jobs and for setups |
| $p_{hj}^{Apt}$, $s_h^{Apt}$ | actual processing time of $J_{hj}$, actual setup time of $\Omega_h$ |
| $C_{hj}$ | completion time of $J_{hj}$ |
| $[h][j]$ | the job scheduled $j$th in the group scheduled $h$th |
| $\pi_\Omega$, $\pi_h$, $R$ | group schedule, job schedule within $\Omega_h$, resource allocation |


#### **3. Basic Results**

For a given schedule $\Pi$, following Wang et al. [15] and Liang et al. [16], by mathematical induction we have

$$C_{[1][1]} = \left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta} + \left(\frac{\varsigma_{[1][1]}}{r_{[1][1]}}\right)^{\eta} + \theta\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta} = \left(\frac{\varsigma_{[1][1]}}{r_{[1][1]}}\right)^{\eta} + (1+\theta)\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta},$$

$$C_{[1][2]} = C_{[1][1]} + \left(\frac{\varsigma_{[1][2]}}{r_{[1][2]}}\right)^{\eta} + \theta C_{[1][1]} = \left(\frac{\varsigma_{[1][2]}}{r_{[1][2]}}\right)^{\eta} + (1+\theta)\left(\frac{\varsigma_{[1][1]}}{r_{[1][1]}}\right)^{\eta} + (1+\theta)^2\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta},$$

$$\vdots$$

$$C_{[1][n_{[1]}]} = \sum_{j=1}^{n_{[1]}}(1+\theta)^{n_{[1]}-j}\left(\frac{\varsigma_{[1][j]}}{r_{[1][j]}}\right)^{\eta} + (1+\theta)^{n_{[1]}}\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta},$$

$$\begin{aligned} C_{[2][1]} ={}& C_{[1][n_{[1]}]} + \left(\frac{o_{[2]}}{r_{[2]}}\right)^{\eta} + \mu C_{[1][n_{[1]}]} + \left(\frac{\varsigma_{[2][1]}}{r_{[2][1]}}\right)^{\eta} + \theta\left(C_{[1][n_{[1]}]} + \left(\frac{o_{[2]}}{r_{[2]}}\right)^{\eta} + \mu C_{[1][n_{[1]}]}\right) \\ ={}& \sum_{j=1}^{n_{[1]}}(1+\mu)(1+\theta)^{n_{[1]}-j+1}\left(\frac{\varsigma_{[1][j]}}{r_{[1][j]}}\right)^{\eta} + (1+\mu)(1+\theta)^{n_{[1]}+1}\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta} + (1+\theta)\left(\frac{o_{[2]}}{r_{[2]}}\right)^{\eta} + \left(\frac{\varsigma_{[2][1]}}{r_{[2][1]}}\right)^{\eta}, \end{aligned}$$

$$\begin{aligned} C_{[2][2]} ={}& C_{[2][1]} + \left(\frac{\varsigma_{[2][2]}}{r_{[2][2]}}\right)^{\eta} + \theta C_{[2][1]} \\ ={}& \sum_{j=1}^{n_{[1]}}(1+\mu)(1+\theta)^{n_{[1]}-j+2}\left(\frac{\varsigma_{[1][j]}}{r_{[1][j]}}\right)^{\eta} + (1+\mu)(1+\theta)^{n_{[1]}+2}\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta} + (1+\theta)^2\left(\frac{o_{[2]}}{r_{[2]}}\right)^{\eta} + (1+\theta)\left(\frac{\varsigma_{[2][1]}}{r_{[2][1]}}\right)^{\eta} + \left(\frac{\varsigma_{[2][2]}}{r_{[2][2]}}\right)^{\eta}, \end{aligned}$$

$$\vdots$$

$$C_{[2][n_{[2]}]} = \sum_{j=1}^{n_{[1]}}(1+\mu)(1+\theta)^{n_{[1]}+n_{[2]}-j}\left(\frac{\varsigma_{[1][j]}}{r_{[1][j]}}\right)^{\eta} + (1+\mu)(1+\theta)^{n_{[1]}+n_{[2]}}\left(\frac{o_{[1]}}{r_{[1]}}\right)^{\eta} + \sum_{j=1}^{n_{[2]}}(1+\theta)^{n_{[2]}-j}\left(\frac{\varsigma_{[2][j]}}{r_{[2][j]}}\right)^{\eta} + (1+\theta)^{n_{[2]}}\left(\frac{o_{[2]}}{r_{[2]}}\right)^{\eta},$$

$$\vdots$$

$$C_{[m][n_{[m]}]} = \sum_{h=1}^{m}\sum_{j=1}^{n_{[h]}}(1+\mu)^{m-h}(1+\theta)^{\sum_{l=h}^{m} n_{[l]}-j}\left(\frac{\varsigma_{[h][j]}}{r_{[h][j]}}\right)^{\eta} + \sum_{h=1}^{m}(1+\mu)^{m-h}(1+\theta)^{\sum_{l=h}^{m} n_{[l]}}\left(\frac{o_{[h]}}{r_{[h]}}\right)^{\eta}.$$
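As a numerical sanity check on this induction, the following sketch (our own illustrative helpers, not part of the paper's algorithms; the names `completion_times` and `closed_form_last` are ours, and each `base` value stands in for a resource-compressed term $(\varsigma_{[h][j]}/r_{[h][j]})^{\eta}$ or $(o_{[h]}/r_{[h]})^{\eta}$) simulates the schedule with rules (1) and (2) and compares the last completion time with the closed form for $C_{[m][n_{[m]}]}$:

```python
def completion_times(groups, theta, mu):
    """Simulate the schedule. Each group is (setup_base, [job_bases]), where
    setup_base = (o_h / r_h)^eta and each job_base = (sigma_hj / r_hj)^eta."""
    t = 0.0
    C = []
    for setup_base, job_bases in groups:
        t = t + setup_base + mu * t          # actual setup time, rule (2)
        row = []
        for b in job_bases:
            t = t + b + theta * t            # actual processing time, rule (1)
            row.append(t)
        C.append(row)
    return C

def closed_form_last(groups, theta, mu):
    """Closed-form C_[m][n_m] from the induction above (0-based positions)."""
    m = len(groups)
    n = [len(jobs) for _, jobs in groups]
    total = 0.0
    for h in range(m):
        tail = sum(n[h:])                    # sum_{l=h}^{m} n_[l]
        setup_base, job_bases = groups[h]
        for j, b in enumerate(job_bases, start=1):
            total += (1 + mu) ** (m - 1 - h) * (1 + theta) ** (tail - j) * b
        total += (1 + mu) ** (m - 1 - h) * (1 + theta) ** tail * setup_base
    return total
```

For any instance, the simulated last completion time agrees with the closed form, which is exactly what the induction asserts.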

According to the above equations, we have

$$TCT = \sum_{h=1}^{m}\sum_{j=1}^{n_h} C_{[h][j]} = \sum_{h=1}^{m}\sum_{j=1}^{n_h}\left[\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{r_{[h][j]}}\right)^{\eta} + \sum_{h=1}^{m}\left[\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\left(\frac{o_{[h]}}{r_{[h]}}\right)^{\eta}. \tag{4}$$

**Lemma 1.** *For a given schedule $\Pi$ of $P_{TCT}$, the optimal resource allocation $R^*(\pi_\Omega, \pi_h \,|\, h = 1, \dots, m)$ is*

$$r_{[h][j]}^{*} = \frac{\left[\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\sum_{h=1}^{m}\sum_{j=1}^{n_h}\left[\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}} \times V \tag{5}$$

for $h = 1, \dots, m$; $j = 1, \dots, n_h$, and

$$r_{[h]}^{*} = \frac{\left[\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\,(o_{[h]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\sum_{h=1}^{m}\left[\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\,(o_{[h]})^{\eta}\right]^{\frac{1}{\eta+1}}} \times U \tag{6}$$

for $h = 1, \dots, m$.
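To illustrate Lemma 1 numerically, the following sketch (our own helper code, not from the paper; `weights` computes the bracketed coefficients of Equation (4), `allocate` implements Equations (5) and (6), and `tct` evaluates the objective (4)) builds the optimal allocation and compares it against a naive uniform allocation:

```python
def weights(n, theta, mu):
    """Bracketed job coefficients W[h][j] and setup coefficients Wg[h] of
    Equation (4), with 0-based group positions h and 1-based job positions j."""
    m = len(n)
    W, Wg = [], []
    for h in range(m):
        row = []
        for j in range(1, n[h] + 1):
            w = sum((1 + theta) ** (l - j) for l in range(j, n[h] + 1))
            for k in range(h + 1, m):
                off = sum(n[h:k])            # sum_{xi=h}^{k-1} n_[xi]
                w += (1 + mu) ** (k - h) * sum(
                    (1 + theta) ** (l - j + off) for l in range(1, n[k] + 1))
            row.append(w)
        W.append(row)
        Wg.append(sum((1 + mu) ** (k - h) * sum(
            (1 + theta) ** (l + sum(n[h:k])) for l in range(1, n[k] + 1))
            for k in range(h, m)))
    return W, Wg

def allocate(workloads, setups, theta, mu, eta, V, U):
    """Optimal allocation of Equations (5) and (6): resource proportional to
    (weight * workload^eta)^(1/(eta+1)), normalized to V (jobs) and U (setups)."""
    n = [len(g) for g in workloads]
    W, Wg = weights(n, theta, mu)
    jw = [[(W[h][j] * workloads[h][j] ** eta) ** (1 / (eta + 1))
           for j in range(n[h])] for h in range(len(n))]
    S = sum(sum(row) for row in jw)
    gw = [(Wg[h] * setups[h] ** eta) ** (1 / (eta + 1)) for h in range(len(n))]
    return ([[v * V / S for v in row] for row in jw],
            [v * U / sum(gw) for v in gw])

def tct(workloads, setups, r_jobs, r_groups, theta, mu, eta):
    """Total completion time of Equation (4) for a given allocation."""
    n = [len(g) for g in workloads]
    W, Wg = weights(n, theta, mu)
    return (sum(W[h][j] * (workloads[h][j] / r_jobs[h][j]) ** eta
                for h in range(len(n)) for j in range(n[h]))
            + sum(Wg[h] * (setups[h] / r_groups[h]) ** eta
                  for h in range(len(n))))
```

By construction the job allocations sum to $V$ and the setup allocations to $U$; comparing the resulting objective against, e.g., a uniform feasible allocation illustrates the optimality the lemma claims.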

**Proof.** Obviously, Equation (4) is a convex function with respect to $r_{[h][j]}$ and $r_{[h]}$. It is also obvious that in an optimal solution all resources are consumed, i.e., $\sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{[h][j]} - V = 0$ and $\sum_{h=1}^{m} r_{[h]} - U = 0$. As in Wang and Liang [17], Shabtay and Kaspi [20], and Wang and Wang [21], for a given schedule the optimal resource allocation of the problem $P_{TCT}$ can be obtained by the Lagrange multiplier method. The Lagrangian function is

$$\begin{aligned} Q(\kappa, \upsilon, R) ={}& \sum_{h=1}^{m}\sum_{j=1}^{n_h} C_{[h][j]} + \kappa\left(\sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{[h][j]} - V\right) + \upsilon\left(\sum_{h=1}^{m} r_{[h]} - U\right) \\ ={}& \sum_{h=1}^{m}\sum_{j=1}^{n_h}\left[\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{r_{[h][j]}}\right)^{\eta} \\ &+ \sum_{h=1}^{m}\left[\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\left(\frac{o_{[h]}}{r_{[h]}}\right)^{\eta} \\ &+ \kappa\left(\sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{[h][j]} - V\right) + \upsilon\left(\sum_{h=1}^{m} r_{[h]} - U\right), \end{aligned} \tag{7}$$

where $\kappa \geq 0$ and $\upsilon \geq 0$ are the Lagrangian multipliers. Differentiating Equation (7) with respect to $r_{[h][j]}$ and $\kappa$ gives

$$\frac{\partial Q(\kappa, \upsilon, R)}{\partial r_{[h][j]}} = \kappa - \eta\left[\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\frac{(\varsigma_{[h][j]})^{\eta}}{(r_{[h][j]})^{\eta+1}} = 0 \tag{8}$$

and

$$\frac{\partial Q(\kappa, \upsilon, R)}{\partial \kappa} = \sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{[h][j]} - V = 0. \tag{9}$$

By using Equations (8) and (9), it follows that

$$r_{[h][j]} = \left[\frac{\eta\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)}{\kappa}\right]^{\frac{1}{\eta+1}} (\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}} \tag{10}$$

$$\kappa^{\frac{1}{\eta+1}} = \frac{\sum_{h=1}^{m}\sum_{j=1}^{n_h}\left[\eta\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}{V}. \tag{11}$$

Combining Equations (10) and (11), we obtain

$$r_{[h][j]}^{*} = \frac{\left[\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}}{\sum_{h=1}^{m}\sum_{j=1}^{n_h}\left[\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)(\varsigma_{[h][j]})^{\eta}\right]^{\frac{1}{\eta+1}}} \times V.$$

Similarly, Equation (6) can be obtained.

By Lemma 1, substituting Equations (5) and (6) into $TCT = \sum_{h=1}^{m}\sum_{j=1}^{n_h} C_{hj}$, we have

$$\begin{aligned} TCT ={}& \sum_{h=1}^{m}\sum_{j=1}^{n_h}\left[\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{r_{[h][j]}}\right)^{\eta} + \sum_{h=1}^{m}\left[\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right]\left(\frac{o_{[h]}}{r_{[h]}}\right)^{\eta} \\ ={}& V^{-\eta}\left(\sum_{h=1}^{m}\sum_{j=1}^{n_h}\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1} \\ &+ U^{-\eta}\left(\sum_{h=1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1}. \end{aligned} \tag{12}$$

**Lemma 2.** *For $P_{TCT}$, the optimal job schedule $\pi_h^*$ within group $\Omega_h$ ($h = 1, \dots, m$) sequences the jobs in non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h1} \leq \varsigma_{h2} \leq \cdots \leq \varsigma_{hn_h}$.*

**Proof.** From Equation (12), for group Ω[*h*], the objective cost is:

$$\sum_{j=1}^{n_h}\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}} = \sum_{j=1}^{n_h} x_{[h][j]}\, y_{[h][j]},$$

where $x_{[h][j]} = \left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}$ and $y_{[h][j]} = (\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}$. The term $x_{[h][j]}$ is a monotonically decreasing function of $j$. By the HLP rule (Hardy et al. [22]), the sum $\sum_{j=1}^{n_h} x_{[h][j]}\, y_{[h][j]}$ is minimized if the sequence $x_{[h][1]}, x_{[h][2]}, \dots, x_{[h][n_h]}$ is ordered non-decreasingly and the sequence $y_{[h][1]}, y_{[h][2]}, \dots, y_{[h][n_h]}$ is ordered non-increasingly, or vice versa. Hence, for each group $\Omega_{[h]}$ ($h = 1, \dots, m$), sequencing the jobs in non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h1} \leq \varsigma_{h2} \leq \cdots \leq \varsigma_{hn_h}$, yields the result.
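The HLP (rearrangement) argument can be seen in a small example. The sketch below is our own illustration (the helper `best_pairing` is ours): with a decreasing positive sequence $x$, the inner product $\sum_j x_j y_{\sigma(j)}$ over all orderings $\sigma$ of $y$ is minimized when $y$ is arranged non-decreasingly.

```python
from itertools import permutations

def best_pairing(x, y):
    """Minimum of sum(x[j] * y[sigma(j)]) over all orderings sigma of y."""
    return min(sum(a * b for a, b in zip(x, p)) for p in permutations(y))

x = [5.0, 3.0, 2.0, 1.0]   # monotonically decreasing, like x_[h][j]
y = [0.7, 1.9, 1.1, 0.4]   # plays the role of y_[h][j]

# Pairing the decreasing x with y sorted non-decreasingly attains the minimum.
sorted_val = sum(a * b for a, b in zip(x, sorted(y)))
```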

#### **4. Special Cases**

By Lemma 2, for each group $\Omega_h$, the optimal schedule $\pi_h^*$ sequences the jobs in non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h1} \leq \varsigma_{h2} \leq \cdots \leq \varsigma_{hn_h}$. From Equation (12), let

$$X = U^{-\eta}\left(\sum_{h=1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1}$$

and

$$Y = V^{-\eta}\left(\sum_{h=1}^{m}\sum_{j=1}^{n_h}\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1}.$$

In this section, we study some special cases that can be solved in polynomial time. In each case, the parameters $\varsigma_{hj}$, $o_h$, and $n_h$ satisfy a relationship under which $X$ (or $Y$) is a constant or is easily minimized.

#### *4.1. Case 1*

If $o_h = o$ and $n_h = \frac{n}{m} = \ddot{n}$ ($h = 1, \dots, m$), from Equation (12), it follows that

$$\begin{aligned} & U^{-\eta}\left(\sum_{h=1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1} \\ ={}& U^{-\eta} o^{\eta}\left(\sum_{h=1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l+(k-h)\ddot{n}}\right)^{\frac{1}{\eta+1}}\right)^{\eta+1} \end{aligned}$$

is a constant: $U$, $o$, $\eta$, $\mu$, $\theta$, $\ddot{n}$, and $m$ are all given in advance, so this term does not depend on the group schedule. Let

$$Y_{h\rho} = \begin{cases} 1, & \text{if } \Omega_h \text{ is assigned to the } \rho\text{th position}, \\ 0, & \text{otherwise}, \end{cases} \tag{13}$$

and

$$\Theta_{h\rho} = \sum_{j=1}^{\ddot{n}}\left(\sum_{l=j}^{\ddot{n}}(1+\theta)^{l-j} + \sum_{k=\rho+1}^{m}(1+\mu)^{k-\rho}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l-j+(k-\rho)\ddot{n}}\right)^{\frac{1}{\eta+1}}(\varsigma_{h(j)})^{\frac{\eta}{\eta+1}}. \tag{14}$$

The optimal group schedule can be translated into the following assignment problem:

$$\text{Min} \quad \sum_{h=1}^{m}\sum_{\rho=1}^{m} \Theta_{h\rho} Y_{h\rho} \tag{15}$$

$$\text{s.t.} \quad \sum_{\rho=1}^{m} Y_{h\rho} = 1, \quad h = 1, \dots, m, \tag{16}$$

$$\sum_{h=1}^{m} Y_{h\rho} = 1, \quad \rho = 1, \dots, m, \tag{17}$$

$$Y_{h\rho} \in \{0, 1\}, \quad h, \rho = 1, \dots, m. \tag{18}$$

Thus, for the special case $o_h = o$ and $n_h = \frac{n}{m} = \ddot{n}$ ($h = 1, \dots, m$), the problem $P_{TCT}$ can be solved by:

**Theorem 1.** *If $o_h = o$ and $n_h = \frac{n}{m} = \ddot{n}$ ($h = 1, \dots, m$), $P_{TCT}$ is solvable by Algorithm 1 in $O(n^3)$ time.*

**Algorithm 1:** Case 1

*Step 1.* For each group $\Omega_h$ ($h = 1, \dots, m$), determine the optimal job schedule $\pi_h^*$ by Lemma 2, i.e., $\varsigma_{h1} \leq \varsigma_{h2} \leq \cdots \leq \varsigma_{hn_h}$.

*Step 2.* Calculate $\Theta_{h\rho}$ ($h, \rho = 1, \dots, m$) and determine the optimal group schedule $\pi_\Omega^*$ by solving the assignment problem (15)–(18).

*Step 3.* Calculate the optimal resource allocations $r_{hj}^*$ and $r_h^*$ by Equations (5) and (6) (see Lemma 1).

**Proof.** Step 1 takes $O(\sum_{h=1}^{m} n_h \log n_h) \leq O(n \log n)$ time. Step 2 solves an assignment problem, which needs $O(m^3) \leq O(n^3)$ time. Step 3 needs $O(n)$ time. Thus, the total time is $O(n^3)$.
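A sketch of Algorithm 1 in Python (our own illustration, not the paper's implementation; `theta_matrix` follows Equation (14), and for simplicity the assignment problem (15)–(18) is solved by brute force over group permutations, which is adequate only for small $m$; an $O(m^3)$ assignment algorithm would be used in practice):

```python
from itertools import permutations

def theta_matrix(workloads, theta, mu, eta):
    """Theta[h][rho] of Equation (14), 0-based positions; all groups share the
    common size n-dot. Jobs within each group are sorted per Lemma 2."""
    m = len(workloads)
    nd = len(workloads[0])                    # common group size n-dot
    T = [[0.0] * m for _ in range(m)]
    for h in range(m):
        s = sorted(workloads[h])              # Lemma 2: non-decreasing order
        for rho in range(m):
            val = 0.0
            for j in range(1, nd + 1):
                w = sum((1 + theta) ** (l - j) for l in range(j, nd + 1))
                w += sum((1 + mu) ** (k - rho)
                         * sum((1 + theta) ** (l - j + (k - rho) * nd)
                               for l in range(1, nd + 1))
                         for k in range(rho + 1, m))
                val += w ** (1 / (eta + 1)) * s[j - 1] ** (eta / (eta + 1))
            T[h][rho] = val
    return T

def best_group_order(T):
    """Brute-force stand-in for the assignment problem (15)-(18):
    perm[h] is the position assigned to group h."""
    m = len(T)
    return min(permutations(range(m)),
               key=lambda perm: sum(T[h][perm[h]] for h in range(m)))
```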

*4.2. Case 2*

If $\varsigma_{hj} = \varsigma$ and $n_h = \frac{n}{m} = \ddot{n}$ ($h = 1, \dots, m$; $j = 1, \dots, n_h$), we have:

**Lemma 3.** *For $P_{TCT}$, if $\varsigma_{hj} = \varsigma$ and $n_h = \frac{n}{m} = \ddot{n}$ ($h = 1, \dots, m$; $j = 1, \dots, n_h$), then the optimal group schedule $\pi_\Omega^*$ sequences groups in non-decreasing order of $o_h$, i.e., $o_{(1)} \leq o_{(2)} \leq \dots \leq o_{(m)}$.*

**Proof.** From Equation (12), if $\varsigma_{hj} = \varsigma$ and $n_h = \frac{n}{m} = \ddot{n}$, then

$$\begin{aligned} & V^{-\eta}\left(\sum_{h=1}^{m}\sum_{j=1}^{n_h}\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1} \\ ={}& V^{-\eta}\varsigma^{\eta}\left(\sum_{h=1}^{m}\sum_{j=1}^{\ddot{n}}\left(\sum_{l=j}^{\ddot{n}}(1+\theta)^{l-j} + \sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l-j+(k-h)\ddot{n}}\right)^{\frac{1}{\eta+1}}\right)^{\eta+1} \end{aligned}$$

is a constant: $V$, $\varsigma$, $\eta$, $\mu$, $\theta$, $\ddot{n}$, and $m$ are all given in advance, so this term does not depend on the group schedule.

From Equation (12) and the above analysis, minimizing $TCT$ is therefore equivalent to minimizing the following expression:

$$\begin{aligned} & U^{-\eta}\left(\sum_{h=1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\xi=h}^{k} n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1} \\ ={}& U^{-\eta}\left(\sum_{h=1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l+(k-h)\ddot{n}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right)^{\eta+1}. \end{aligned} \tag{19}$$

Similar to Lemma 2, $\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot{n}}(1+\theta)^{l+(k-h)\ddot{n}}\right)^{\frac{1}{\eta+1}}$ is a monotonically decreasing function of $h$, and by the HLP rule (Hardy et al. [22]), Equation (19) can be minimized by arranging the groups in non-decreasing order of $o_h$; this completes the proof.
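The HLP (Hardy–Littlewood–Pólya) matching argument pairs the largest positional weights with the smallest values. A small brute-force check in Python (with hypothetical weights and $o_h$ values, not taken from the paper) illustrates this:

```python
from itertools import permutations

def weighted_sum(weights, values, order):
    # Positional weight at slot h multiplies the value of the group placed there.
    return sum(w * values[g] for w, g in zip(weights, order))

weights = [5.0, 3.0, 1.5, 0.5]   # hypothetical positional weights, decreasing in h
o = [7.0, 2.0, 9.0, 4.0]         # hypothetical group parameters o_h

best = min(permutations(range(4)), key=lambda p: weighted_sum(weights, o, p))
# The minimizer places the groups in non-decreasing order of o_h:
assert [o[g] for g in best] == sorted(o)
```

This is exactly the pattern used in the proof: a decreasing weight sequence matched against an increasing value sequence minimizes the weighted sum.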

Thus, for the special case $\varsigma_{hj} = \varsigma$ and $n_h = n/m = \ddot{n}$ ($h = 1, \cdots, m$; $j = 1, \cdots, n_h$), the problem $P_{tct^3}$ can be solved by:

**Theorem 2.** *If $\varsigma_{hj} = \varsigma$ and $n_h = n/m = \ddot{n}$ ($h = 1, \cdots, m$), $P_{tct^3}$ is solvable by Algorithm 2 in $O(n \log n)$ time.*

#### **Algorithm 2:** Case 2

*Step 1.* For each group $\Omega_h$ ($h = 1, \cdots, m$), an optimal job schedule can be obtained in any order.

*Step 2.* The optimal group schedule $\pi^*_\Omega$ is the non-decreasing order of $o_h$.

*Step 3.* The optimal resource allocations $r^*_{hj}$ and $r^*_h$ are calculated by Equations (5) and (6) (see Lemma 1).

#### *4.3. Case 3*

For any two groups $\Omega_x$ and $\Omega_y$, if $o_x \le o_y$ implies $n_x \ge n_y$, we have:

**Lemma 4.** *For any two groups $\Omega_x$ and $\Omega_y$ of $P_{tct^3}$, if $o_x \le o_y$ implies $n_x \ge n_y$, the optimal group schedule $\bar{\pi}^*_\Omega$ is the non-decreasing order of $o_h$.*

**Proof.** Similar to the proof of Wang et al. [6] (see Equation (12)).

For this special case, i.e., when $o_x \le o_y$ implies $n_x \ge n_y$ for any two groups $\Omega_x$ and $\Omega_y$, $P_{tct^3}$ can be solved by:

**Theorem 3.** *For any two groups $\Omega_x$ and $\Omega_y$, if $o_x \le o_y$ implies $n_x \ge n_y$, $P_{tct^3}$ is solvable by Algorithm 3 in $O(n \log n)$ time.*


#### **5. A General Case**

For $P_{tct^3}$, we cannot find a polynomial-time optimal algorithm, and the complexity of determining the optimal group schedule is still an open problem; we conjecture that this problem is NP-hard. Thus, $B\&B$ (i.e., branch-and-bound, which requires a lower bound and an upper bound) and heuristic algorithms might be a good way to solve $P_{tct^3}$.

#### *5.1. Upper Bound*

For the $tct^3$ minimization, any feasible solution provides an upper bound (denoted by $UB$). Similar to Section 3, the group sorting method can be used as the heuristic, and this solution is then improved by using the pairwise interchange method.

For a better comparison, as an alternative or complement to Algorithm 4, a tabu search algorithm (denoted by $TS$, i.e., Algorithm 5) can also be used to solve $P_{tct^3}$.

#### **Algorithm 4:** Upper Bound

*Step 1.* For each group $\Omega_h$ ($h = 1, \cdots, m$), an internal optimal job schedule $\pi^*_h$ (Lemma 2) is: $\varsigma_{h1} \le \varsigma_{h2} \le \cdots \le \varsigma_{hn_h}$.

*Step 2.* Schedule the groups in non-decreasing order of $o_h$, i.e., $o_{(1)} \le o_{(2)} \le \ldots \le o_{(m)}$.

*Step 3.* Schedule the groups in non-increasing order of $n_h$, i.e., $n_{\langle\langle 1\rangle\rangle} \ge n_{\langle\langle 2\rangle\rangle} \ge \cdots \ge n_{\langle\langle m\rangle\rangle}$.

*Step 4.* From Steps 2 and 3, select the schedule with the smaller $tct^3$ value (see Equation (12)) as the initial group schedule $\bar{\pi}_\Omega$.

*Step 5.* Set $k = 1$.

*Step 6.* Set $s = k + 1$.

*Step 7.* Obtain a new group schedule by exchanging the $k$th and $s$th groups (denoted $\bar{\pi}^*_\Omega$); if the $tct^3$ value of $\bar{\pi}^*_\Omega$ is smaller than that of $\bar{\pi}_\Omega$, replace $\bar{\pi}_\Omega$ with $\bar{\pi}^*_\Omega$.

*Step 8.* If $s < m$, set $s = s + 1$ and go to Step 7.

*Step 9.* If $k < m - 1$, set $k = k + 1$ and go to Step 6; otherwise, STOP and output the best group schedule $\bar{\pi}^*_\Omega$ found by the heuristic and its objective value $tct^3$.

*Step 10.* According to Lemma 1, calculate the resource allocations by Equations (5) and (6).
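The heuristic above can be sketched in Python as follows. Here `tct3` stands for the objective of Equation (12) and is assumed to be supplied as a callable; the stand-in cost used in the illustration is *not* Equation (12).

```python
def heuristic_order(groups, tct3):
    """Algorithm 4 sketch: pick the better of two seed orders, then improve
    by pairwise interchange.  `groups` is a list of (o_h, n_h) pairs and
    `tct3` maps a group order to its objective value."""
    by_o = sorted(range(len(groups)), key=lambda h: groups[h][0])    # Step 2
    by_n = sorted(range(len(groups)), key=lambda h: -groups[h][1])   # Step 3
    order = min((by_o, by_n), key=tct3)                              # Step 4
    for k in range(len(order) - 1):                                  # Steps 5-9
        for s in range(k + 1, len(order)):
            trial = order[:]
            trial[k], trial[s] = trial[s], trial[k]
            if tct3(trial) < tct3(order):                            # Step 7
                order = trial
    return order

# Illustration with a stand-in objective (NOT Equation (12)):
groups = [(4, 2), (1, 5), (3, 1)]
cost = lambda order: sum((i + 1) * groups[h][0] for i, h in enumerate(order))
print(heuristic_order(groups, cost))  # a locally optimal order
```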

#### **Algorithm 5:** $TS$

*Step 1.* For each group $\Omega_h$ ($h = 1, \cdots, m$), an internal optimal job schedule $\pi^*_h$ can be obtained by Lemma 2, i.e., $\varsigma_{h1} \le \varsigma_{h2} \le \cdots \le \varsigma_{hn_h}$.

*Step 2.* Let the tabu list be empty and the iteration number be zero.

*Step 3.* Choose an initial group schedule by Steps 2–4 of Algorithm 4, calculate its $tct^3$ value (see Equation (12)), and set the current group schedule as the best solution $\bar{\pi}^*_\Omega$.

*Step 4.* Search the neighborhood of the current group schedule, which is generated by randomly exchanging any two groups, and select the group schedule $\bar{\pi}^{**}_\Omega$ with the smallest objective value in the neighborhood that is not in the tabu list.

*Step 5.* If $tct^3(\bar{\pi}^{**}_\Omega) < tct^3(\bar{\pi}^*_\Omega)$, then let $\bar{\pi}^*_\Omega = \bar{\pi}^{**}_\Omega$. Update the tabu list and the iteration counter.

*Step 6.* If no group schedule in the neighborhood is outside the tabu list, or the maximum number of iterations is reached, output the locally optimal group schedule $\bar{\pi}^*_\Omega$ and $tct^3(\bar{\pi}^*_\Omega)$; otherwise, update the tabu list and go to Step 4.

*Step 7.* According to Lemma 1, calculate the resource allocations by Equations (5) and (6).
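A minimal Python sketch of this tabu scheme follows (the paper's implementation is in C++; the neighborhood size, tabu-list length, and stand-in objective here are illustrative assumptions, not the paper's settings).

```python
import random

def tabu_search(order, tct3, max_iter=2000, tabu_len=7, seed=0):
    """Algorithm 5 sketch: random pairwise-swap neighborhood with a short
    tabu list of forbidden swaps.  `tct3` stands in for Equation (12)."""
    rng = random.Random(seed)
    best, best_val = order[:], tct3(order)
    cur = order[:]
    tabu = []
    for _ in range(max_iter):
        i, j = rng.sample(range(len(cur)), 2)   # random exchange of two groups
        move = (min(i, j), max(i, j))
        if move in tabu:
            continue
        cur[i], cur[j] = cur[j], cur[i]
        val = tct3(cur)
        if val < best_val:                      # Step 5: keep an improving move
            best, best_val = cur[:], val
        else:                                   # otherwise undo the swap
            cur[i], cur[j] = cur[j], cur[i]
        tabu.append(move)                       # Step 6: update the tabu list
        if len(tabu) > tabu_len:
            tabu.pop(0)
    return best, best_val

# Stand-in objective for illustration (NOT Equation (12)):
o = [4, 1, 3, 5, 2]
cost = lambda order: sum((i + 1) * o[h] for i, h in enumerate(order))
print(tabu_search(list(range(5)), cost))
```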

#### *5.2. Lower Bound*

Let $\bar{\pi}_\Omega = (\bar{\pi}_{\Omega p}, \bar{\pi}_{\Omega u})$ be a group schedule, where $\bar{\pi}_{\Omega p}$ (respectively, $\bar{\pi}_{\Omega u}$) is the scheduled (respectively, unscheduled) part, and there are $r$ groups in $\bar{\pi}_{\Omega p}$. From Equation (12) and Lemma 4, a lower bound (denoted by $LB$) on $P_{tct^3}$ is

$$\begin{split} LB =\; & V^{-\eta}\left(\sum_{h=1}^{m}\sum_{j=1}^{n_{[h]}}\left(\sum_{l=j}^{n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-j-n_{[k]}+\sum_{\zeta=h}^{k}n_{[\zeta]}}\right)^{\frac{1}{\eta+1}}\left(\varsigma_{[h][j]}\right)^{\frac{\eta}{\eta+1}}\right)^{\eta+1}\\ & + U^{-\eta}\left(\sum_{h=1}^{r}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{[k]}}(1+\theta)^{l-n_{[k]}+\sum_{\zeta=h}^{k}n_{[\zeta]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right.\\ &\qquad \left.+\sum_{h=r+1}^{m}\left(\sum_{k=h}^{m}(1+\mu)^{k-h}\sum_{l=1}^{n_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-n_{\langle\langle k\rangle\rangle}+\sum_{\zeta=h}^{k}n_{\langle\langle\zeta\rangle\rangle}}\right)^{\frac{1}{\eta+1}}(o_{(h)})^{\frac{\eta}{\eta+1}}\right)^{\eta+1}, \end{split} \tag{20}$$

where $\varsigma_{h1} \le \varsigma_{h2} \le \cdots \le \varsigma_{hn_h}$, $o_{(r+1)} \le o_{(r+2)} \le \ldots \le o_{(m)}$, and $n_{\langle\langle r+1\rangle\rangle} \ge n_{\langle\langle r+2\rangle\rangle} \ge \cdots \ge n_{\langle\langle m\rangle\rangle}$ (remark: $o_{(h)}$ and $n_{\langle\langle h\rangle\rangle}$ ($h = r+1, \ldots, m$) do not necessarily correspond to the same group).

From the $UB$ (see Algorithm 4) and $LB$ (see Equation (20)), a standard $B\&B$ algorithm can be given.

#### **6. Computational Result**

A series of computational experiments was performed to evaluate the effectiveness of the $UB$, $B\&B$, and $TS$ algorithms; the $TS$ algorithm was terminated after 2000 iterations. The proposed algorithms were coded in C++ and run on a desktop computer with an Intel® Core™ i5-10500 3.10 GHz CPU and 8 GB RAM under the Windows® 10 operating system. The following parameters were randomly generated: $\varsigma_{hj}$ is uniformly distributed in $[1, 100]$; $o_h$ is uniformly distributed in $[1, 50]$; $\theta$ and $\mu$ are uniformly distributed in $(0, 0.5)$ and $(0.5, 1)$, respectively; $U = V = 500$; $n = 100, 150, 200, 250, 300$; $m = 12, 13, 14, 15, 16$ (at least one job per group); $\eta = 2$. For each combination ($n$, $m$, and $\theta(\mu)$), 10 replicas were randomly generated, and the maximum CPU time for each instance was set to 3600 s. For the $B\&B$ algorithm, the average and maximum CPU times (in seconds) and the average and maximum node counts are reported.

node numbers were given. The error bound of *UB* and <sup>4567</sup> *ts* algorithms is given by:

$$\frac{tct^3(Y) - tct^3(Opt)}{tct^3(Opt)},$$

where $Y \in \{UB, TS\}$, $tct^3(Y)$ is the $tct^3$ value obtained by $Y$, and $tct^3(Opt)$ is the optimal value obtained by the $B\&B$ algorithm. The computational results are given in Tables 2 and 3, from which it is easy to see that the $B\&B$ algorithm can solve instances with up to 300 jobs in a reasonable amount of time, and $UB$ performs very well compared to $TS$ in terms of the error bound. When $n \le 300$, the maximum error bound is less than 0.001559 (i.e., relative error $\le$ 0.1559%).
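The error bound itself is a one-line relative-gap computation; for instance (with made-up values, not the paper's experimental data):

```python
def error_bound(tct_y, tct_opt):
    # Relative gap of a heuristic value tct_y against the B&B optimum.
    return (tct_y - tct_opt) / tct_opt

# A heuristic value 0.1559% above the optimum gives the bound 0.001559:
assert abs(error_bound(100.1559, 100.0) - 0.001559) < 1e-9
```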

*Mathematics* **2022**, *10*, 2983

**Table 2.** Results of algorithms for $\theta, \mu \sim (0, 0.5)$.



**Table 3.** Results of algorithms for $\theta, \mu \sim (0.5, 1)$.


#### **7. Conclusions**

This paper investigated the group scheduling problem with deterioration effects and resource allocation. The goal was to determine $\bar{\pi}^*_\Omega$, $\bar{\pi}^*_h$ ($h = 1, \cdots, m$) in $\Omega_h$, and $R^*$ such that $tct^3$ is minimized subject to $\sum_{h=1}^{m}\sum_{j=1}^{n_h} r_{hj} \le V$ and $\sum_{h=1}^{m} r_h \le U$. For some special cases, we demonstrated that the problem is polynomially solvable. For the general case, we proposed several algorithms to solve the problem. As a future extension, it would be interesting to deal with group scheduling under two scenarios based on processing times (see Wu et al. [23]) and delivery times (see Qian and Zhan [24]).

**Author Contributions:** Conceptualization, J.-X.Y. and H.-B.B.; methodology, J.-X.Y. and N.R.; software, J.-X.Y., H.-B.B. and H.B.; formal analysis, J.-X.Y. and J.-B.W.; investigation, J.-B.W.; writing—original draft preparation, J.-X.Y. and J.-B.W.; writing—review and editing, J.-X.Y. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Liaoning Revitalization Talents Program (Grant No. XLYC2002017) and the Natural Science Foundation of Liaoning Province, China (Grant No. 2020-MS-233).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support this paper are available from the corresponding author upon request.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Multipurpose Aggregation in Risk Assessment**

**Zoltán Kovács 1,\*, Tibor Csizmadia 2, István Mihálcz 3,4 and Zsolt T. Kosztyán <sup>3</sup>**


**Abstract:** Risk-mitigation decisions in risk-management systems are usually based on complex risk indicators. Therefore, aggregation is an important step during risk assessment. Aggregation is important when determining the risk of components or the overall risk of different areas or organizational levels. In this article, the authors identify different aggregation scenarios. They summarize the requirements of aggregation functions and characterize different aggregations according to these requirements. They critique the multiplication-based risk priority number (RPN) used in existing applications and propose the use of other functions in different aggregation scenarios. The behavior of certain aggregation functions in warning systems is also examined. The authors find that, depending on the aggregation location within the organization and the purpose of the aggregation, considerably more functions can be used to develop complex risk indicators. The authors use different aggregations and seriation and biclustering to develop a method for generating corrective and preventive actions. The paper provides contributions for individuals, organizations, and policy makers to assess and mitigate risks at all levels of the enterprise.

**Keywords:** risk assessment; flexibility; multilevel structure

**MSC:** 91B05

#### **1. Introduction**

Risk aggregation plays an important role in various risk-assessment processes [1,2]. Risks can be aggregated for several purposes. It can happen at the lowest level of the systems (processes, products) during the calculation of a complex indicator from the factors. The overall risk value of certain areas can be formed, but risk can also be aggregated along the organizational hierarchy. In the following, we present a novel methodology of aggregation that can be used for different purposes. Aggregation can be considered a method for combining a list of numerical values into a single representative value [3,4]. Traditionally, the risk value is calculated based on a fixed number of risk components. Failure mode and effect analysis (FMEA), which is a widely used risk-assessment method, includes three risk components: the occurrence (O), detectability (D), and severity (S) [5–7]. Various methods that increase the number of risk components have been introduced in the literature. The use of four risk components was proposed by Karasan et al. [8] and Maheswaran and Loganathan [9], and Ouédraogo et al. [10] and Yousefi et al. [11] used five risk components. In contrast to a fixed number of components, Bognár and Hegedűs [12] developed the partial risk map (PRISM) method, which flexibly considers only the FMEA components that are actually needed in the risk-assessment process. The total risk evaluation framework (TREF) method generalizes this idea and can flexibly handle an arbitrary number of risk components [13].

**Citation:** Kovács, Z.; Csizmadia, T.; Mihálcz, I.; Kosztyán, Z.T. Multipurpose Aggregation in Risk Assessment. *Mathematics* **2022**, *10*, 3166. https://doi.org/10.3390/math10173166

Academic Editor: Constantin Zopounidis

Received: 1 August 2022 Accepted: 27 August 2022 Published: 2 September 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In addition, various methods and analyses for aggregating risk components have been proposed, such as the *vIsekriterijumska optimizacija i kompromisno resenje* (VIKOR) method [14,15], the technique for order preference by similarity to the ideal solution (TOPSIS) method [16,17], the elimination and choice expressing the reality (ELECTRE) method [18,19], the evaluation based on the distance from the average solution (EDAS) method [20,21], the preference ranking organization method for enrichment evaluations (PROMETHEE) method [22,23], the Gray relational analysis (GRA) method [24,25], the MULTIMOORA method [26,27], the TODIM (Portuguese acronym for interactive multi-criteria decision making) method [28,29], and the sum of ranking differences (SRD) method [30,31]. These methods use different perspectives and various procedures to aggregate the values of distinct risk components into a single representative risk value.

Conventional risk management systems evaluate risk by calculating the risk priority number (RPN) as an aggregated risk indicator.

Risk indicators can be aggregated further through additional steps. These aggregations can be performed along the hierarchy of the organization, the hierarchy of the processes, or other logical operations.

In terms of aggregation, a common feature of the methods is that these methods provide aggregated values at only one level. The TREF method [13] and the new FMEA [32] consider two levels: the risk-component level and the aggregated value level. No existing methods can handle more than two levels; however, in practice, there are often more than two aggregation levels, and different types of corrective/preventive actions may be needed at the risk component level and the aggregated value level.

Moreover, one of the main constraints of existing methods is that these approaches do not consider risks at different levels of the process hierarchy. However, corrective/preventive actions can be prescribed at each hierarchy level, and different corrective/preventive actions may be needed at various process hierarchy levels. In summary, because the relationships between the process hierarchy levels (causes and effects across levels) are not addressed by existing methods, flexible, total system-level risk assessments have not yet been realized. No work in the literature deals with the multilevel case in the general form presented in this paper. Filipović [33] dealt with the multilevel case, but the domain was limited to the insurance area and the standard (Solvency II) solution. Bjørnsen and Aven [2] provide a good summary of the general issue of aggregation, presenting cases from different domains (the oil and gas industry, stock investment, national, and societal contexts); however, they do not deal with corrective and preventive actions.

In general, it can be concluded that none of the publications in the literature deals with the general approach as it is described in this paper. The most frequently missing components are as follows.


Motivated by the above analyses and literature reviews, we highlight the contributions of this study to existing risk-assessment methods as follows:

*C*<sup>1</sup> A multilevel framework known as the enterprise-level matrix (ELM), which consists of three matrices, is proposed to evaluate risk at different enterprise levels. The three matrices are the risk-level matrix (RLM), the threshold-level matrix (TLM), and the action-level matrix (ALM).


The remainder of this paper is structured as follows. Section 2 introduces the preliminary details and the requirements and characterizations of the aggregation functions. Section 3 demonstrates a practical example of the proposed approach. Section 4 summarizes the paper.

#### **2. Preliminaries**

We use the following terminology throughout this work.

*Risk component:* the input of the aggregation. Risk components can be primary data, such as the occurrence, severity, and detection, which are often called factors. (The term "factor" refers to the most commonly used aggregation method: multiplication.) The components can also be aggregated values, as in vertical risk aggregation in an organization; for example, the mean of the RPNs of a product, process, or organization.

*Aggregated value:* the result of the aggregation. The aggregated value is typically a scalar value; however, it can also be a vector, such as when the risk cannot be characterized by one number.

#### *2.1. The Set of Enterprise-Level Matrices (ELM)*

This study proposes three multilevel matrices: the risk-level matrix (RLM), threshold-level matrix (TLM), and action-level matrix (ALM). These matrices are all multidimensional matrices, with the columns representing the risk components and their aggregations at all levels and the rows representing the process components and their aggregations at all levels. The risk-level matrix (RLM) specifies the risk values of all risk and process components. For each risk value (i.e., for each cell) in the RLM, a threshold value is specified in the threshold-level matrix. The threshold-level matrix includes specific thresholds for all risk values; however, a generic threshold can also be specified for all process and risk components. A corrective/preventive action occurs if a risk value is greater than or equal to the specific threshold value. The action-level matrix contains the specific corrective/preventive actions for mitigating the risk values; these actions can be specific for the given process and risk component or generic for each process and risk component.

The proposed set of multilevel matrices, denoted as the enterprise-level matrix (ELM), helps decision-makers evaluate and assess risk at all levels of the enterprise. In addition, data-mining methods, such as seriation and biclustering, are used to select the set of corrective/preventive tasks.

#### 2.1.1. Risk-Level Matrix

Table 1 specifies the structure of the hierarchical risk-evaluation matrix, hereafter denoted as the risk-level matrix (RLM), where the columns specify the risk components and the rows specify the process components. The rows and columns can both be aggregated; therefore, the aggregation level can be specified for both the rows, such as process component ⇒ process ⇒ process area ⇒ ... ⇒ enterprise-level process, and the columns, such as risk component ⇒ aspect ⇒ ... ⇒ enterprise-level risk component.

**Definition 1.** *Denote $I$ ($J$) as the aggregation level of a row (column). Denote $\mathbf{R}_{I,J} \in \mathbb{R}_{+}^{(n_I \times m_J)}$ as an $n_I \times m_J$ risk-level matrix, where $n_I$ ($m_J$) is the number of rows (columns) at aggregation level $I$ ($J$).*

**Definition 2.** *Let $\mathbf{R}_{I,J}$ be a risk-level matrix, and denote $r_{I,J}(i, j)$ as the risk value of risk component $j = 1, 2, \ldots, m_J$ of process component $i = 1, 2, \ldots, n_I$ at process level $I$ and factor level $J$. Denote $r_{I,J}(i, \cdot)$ as the set of risk components (at process level $I$ and factor level $J$); $r_{I,J}(\cdot, j)$ as the set of processes at process level $I$ and factor level $J$; $r_{I,\cdot}(i, j)$ as the set of factor levels; and $r_{\cdot,J}(i, j)$ as the set of process levels ($I = 1, 2, \ldots, N$, $J = 1, 2, \ldots, M$).*

*The elements of the next level of the RLM can be calculated as follows:*

$$r\_{I+1,J}(i,j) = S\_I(r\_{I,J}(\cdot,j), \mathbf{v}) \tag{1}$$

$$r\_{I,J+1}(i,j) = S\_J(r\_{I,J}(i, \cdot), \mathbf{w}) \tag{2}$$

*where $S_I$ and $S_J$ are at least monotonic aggregation functions and* **v** *and* **w** *are weight vectors.*
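Equations (1) and (2) can be sketched as follows, using a weighted arithmetic mean as one admissible monotone aggregation function (the function choice and the data are illustrative assumptions, not the paper's prescription):

```python
def aggregate_rows(R, v, S):
    """Equation (1) sketch: collapse the rows (process components) of the
    risk-level matrix R into the next process level with aggregation
    function S and weight vector v."""
    cols = len(R[0])
    return [S([R[i][j] for i in range(len(R))], v) for j in range(cols)]

def aggregate_cols(R, w, S):
    """Equation (2) sketch: collapse the columns (risk components) of R
    into the next factor level."""
    return [S(row, w) for row in R]

# A weighted arithmetic mean: monotone in every argument, as required.
wmean = lambda xs, ws: sum(x * w for x, w in zip(xs, ws)) / sum(ws)

R = [[2.0, 8.0],
     [4.0, 6.0]]
print(aggregate_cols(R, [0.5, 0.5], wmean))  # per-process aggregated risk
print(aggregate_rows(R, [0.5, 0.5], wmean))  # per-component aggregated risk
```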

Table 1 shows a risk-level matrix with two risk components, two process components, two factor levels, and two process levels.

**Table 1.** The structure of a risk-level matrix.


**Example 1.** *Following the structure of this multilevel matrix, arbitrary factor and process levels and arbitrary numbers of risk and process components can be specified. For example, in the case of the traditional FMEA method, let $I$ be an arbitrary process level and $J$ be an arbitrary factor level, and suppose that the FMEA is calculated at process level $I$ and factor level $J$. In this case, we have $m_J = 3$, namely, the severity (S), occurrence (O), and detection (D). Suppose $\forall i \in \{1, 2, \ldots, n_I\}$ and $\forall j \in \{1, 2, \ldots, m_J\}$, $v_i = w_j := 1$, $r_{I,J}(i, j) \in \{1, 2, \ldots, 10\}$, $i := 1, \ldots, n$; then,*

$$r\_{I,J+1}(i,j) = \prod\_{j=1}^{m\_J} r\_{I,J}(i,j) \tag{3}$$

$$r\_{I+1,J}(i,j) = \prod\_{i=1}^{n\_I} r\_{I,J}(i,j),\tag{4}$$

*where $r_{I,J+1}(i, j)$ is the vertical aggregation of risk component $i$ at process level $I$, and $r_{I+1,J}(i, j)$ is the horizontal aggregation of risk component $i$ at process level $I$. In this case, the traditional risk priority number indicates the process risk at process level $I + 1$ for an arbitrary risk factor $j$.*
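With unit weights, Equation (3) reduces to the classic risk priority number; a minimal sketch (the example scores are hypothetical):

```python
from math import prod

def rpn(row):
    # Equation (3) with v_i = w_j = 1: the traditional FMEA risk priority
    # number is the product of the risk components, e.g. S x O x D.
    return prod(row)

# One process with severity 7, occurrence 3, detection 5:
assert rpn([7, 3, 5]) == 105
```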

It should be noted that the RLM extends traditional risk-evaluation techniques, such as the FMEA method, to model all levels of process and risk components in one matrix. The RLM allows different kinds of aggregation functions; however, to compare risk values across aggregation levels, the aggregated values should be normalized to the same scale as the risk values. The FMEA approach considers only two levels, and only risk components can be aggregated (i.e., multiplied) into an RPN. Hierarchical frameworks, such as the total risk evaluation framework (TREF), consider risk components at multiple aggregation levels.

**Example 2.** *The TREF approach considers $m_J \in \{2, 3, 4, 5, 6\}$, $v_i, w_j \in \mathbb{R}_{+}$, $r_{I,J}(i, j) \in \{1, 2, \ldots, 10\}$, $\sum_{i:=1}^{n} v_i = 1$, and $i := 1, \ldots, n$ and uses four types of functions:*


*In the case of $\forall i, j$, $v_i = 1/n_I$, the aggregation functions $S^{(1)}_{\cdot}$, $S^{(3)}_{\cdot}$, and $S^{(4)}_{\cdot}$ produce the unweighted geometric mean, unweighted median, and unweighted radial distance of the risk components.*

The TREF approach considers more than three risk components and multiple aggregation functions. However, the RLM can be applied to extend the TREF because the RLM specifies aggregations for both risk components and process levels.

**Definition 3.** *Let $\mathbf{R}_{I,J}$ be a risk-level matrix. Denote $\mathbf{T}_{I,J} \in \mathbb{R}_{+}^{(n_I \times m_J)}$ as a threshold-level matrix. A risk event occurs in process $i$ of risk factor $j$ if $\mathbf{R}_{I,J}(i, j) \ge \mathbf{T}_{I,J}(i, j)$. Formally, the risk event matrix (REM) is $\mathbf{E}_{I,J} \in \{0, 1\}^{(n_I \times m_J)}$, with*

$$e\_{I,J}(i,j) = \begin{cases} 1, & r\_{I,J}(i,j) \ge t\_{I,J}(i,j) \\ 0, & r\_{I,J}(i,j) < t\_{I,J}(i,j) \end{cases}.\tag{5}$$

*A corrective/preventive task should be prescribed if $\sum_i \sum_j e_{I,J}(i, j) \ge \mu_{I,J}$, where $\mu_{I,J} \in \mathbb{Z}$, with $I = 1, 2, \ldots, N$ and $J = 1, 2, \ldots, M$.*
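Equation (5) and the task-prescription rule can be sketched as follows (the threshold $\mu_{I,J} = 2$ and the matrix values are hypothetical):

```python
def risk_event_matrix(R, T):
    """Equation (5): e(i, j) = 1 iff the risk value reaches its threshold."""
    return [[1 if r >= t else 0 for r, t in zip(rr, tt)]
            for rr, tt in zip(R, T)]

R = [[120, 30], [64, 200]]   # hypothetical risk values
T = [[100, 50], [64, 250]]   # hypothetical thresholds
E = risk_event_matrix(R, T)
# A corrective/preventive task is prescribed once enough events accumulate:
needs_action = sum(map(sum, E)) >= 2   # mu_{I,J} = 2, a hypothetical choice
print(E, needs_action)
```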

**Remark 1.** *Threshold values can be arbitrary positive values; however, they should be specified within a specified quantile of risk values.*

**Definition 4.** *Denote $a_{I,J}(i, j) \in \mathcal{A}$ as the $(i, j)$ cell of the corrective/preventive task at process level $I$ and factor level $J$, where $\mathcal{A}$ is the set of corrective/preventive tasks. Each $a_{I,J}(i, j) \in \mathcal{A}$ specifies a quadruplet $a_{I,J}(i, j) = (p_{I,J}(i, j), t_{I,J}(i, j), c_{I,J}(i, j), \mathcal{R}_{I,J}(i, j))$, where $0 \le p_{I,J}(i, j) \le 1$ is the relative priority of the corrective/preventive task (e.g., if and only if the impacts of the risk events should be mitigated: $p_{I,J}(i, j) \leftarrow e_{I,J}(i, j)$), and $t$, $c$, and $\mathcal{R}$ denote the time, cost, and resource demands, respectively.*

#### **Example 3.**


*the threshold of the next factor level. Formally, $T_{1,2}(\cdot, \cdot) = t_{1,2}$. A warning is generated if either a risk-component value or the aggregated value is greater than the threshold. In addition, the TREF method allows warnings to be generated manually via a seventh factor, namely, the criticality factor, where a value of* 1 *indicates that the process is a critical process that must be corrected regardless of its risk value.*

*Due to the column-specific thresholds, different corrective/preventive actions can be specified to mitigate each risk component and its aggregations. Nevertheless, in this case, common corrective/preventive actions are specified to mitigate the risk components.*


Theoretically, the FMEA and TREF methods can both be used in different process levels; however, neither of these methods aggregate the risk values of the processes. The vertical aggregation, which is performed by all risk-assessment techniques, indicates which processes must be corrected. In addition, if the TREF method is followed, corrective/preventive tasks can be specified to decrease the risk-component value. In other words, different corrective/preventive tasks can be specified to decrease the severity or occurrence of a process risk. However, no existing methods provide the general severity or occurrence of the processes performed by a company. The proposed RLM and REM allow us to specify:


These thresholds can be specific for all factor and process levels. The vertical aggregation result indicates the aggregated value of the risk component. The horizontal aggregation result indicates the aggregated value of the process risks.

Traditional methods, the new FMEA approach, and the TREF method can all be modeled by the ELM. In addition, the ELM allows a company to determine specific thresholds and corrective/preventive actions for each risk value and risk event. Corrective/preventive actions can be prioritized, allowing sets of different activities to be incorporated into existing processes. Another advantage of the ELM is that all risk levels are included in the same matrix; therefore, complex improvement projects or processes can be specified to simultaneously mitigate risks at all levels.

#### 2.1.2. Specific Processes

An improvement process is a set of corrective/preventive tasks. This study focuses on the first phase of developing an improvement process, namely, process screening. In this phase, the set of tasks in the improvement process with the greatest impact on risk mitigation is specified. In the proposed algorithm, we have the following steps.

**Step 1** The risk priorities of all corrective/preventive tasks are specified.


In our study, multilevel matrix representations and data-mining techniques, such as seriation and biclustering, are integrated into screening and scheduling algorithms to determine the set of corrective/preventive tasks that mitigate enterprise risks at all aggregation levels. Although these algorithms performed well in general cases, this is the first study that attempts to combine these techniques to improve the whole risk-assessment process.

#### **Step 1—Specification of the task priority matrix**

**Definition 5.** *Let $\mathbf{P} = \mathbf{P}_{I,J} \in [0, 1]^{n_I \times m_J}$, $I = 1, 2, \ldots, N$, $J = 1, 2, \ldots, M$ be a (task) priority matrix. Depending on the decision, $p_{I,J}(i, j)$ is either $p_{I,J}(i, j) = e_{I,J}(i, j)$, or*

$$p\_{I,J}(i,j) = \begin{cases} 1 & \text{, if } r\_{I,J}(i,j) > t\_{I,J}(i,j) \\ (t\_{I,J}(i,j) - r\_{I,J}(i,j)) / r\_{I,J}^{\text{max}} & \text{, otherwise} \end{cases}$$

*where r*max *<sup>I</sup>*,*<sup>J</sup> is the maximal possible risk value at aggregation level* (*I*, *J*)*.*

The task priority matrix **P** is either binarized or 0–1 normalized, with greater numbers indicating higher priority tasks at all aggregation levels. In step 2, seriation is applied, which uses combinatorial data analysis to find a linear arrangement of the objects in a set according to a loss function. The main goal of this process is to reveal the structural information [37].
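The priority rule in Definition 5 can be sketched in a few lines; the array names `r`, `t`, and `r_max` are illustrative, not from the paper:

```python
import numpy as np

def priority(r, t, r_max):
    """Priority rule from Definition 5: a task gets full priority (1)
    when its risk exceeds its threshold; otherwise the normalized
    slack (t - r) / r_max is used."""
    return np.where(r > t, 1.0, (t - r) / r_max)

r = np.array([[8.0, 2.0], [5.0, 9.0]])   # risk values (invented)
t = np.array([[6.0, 4.0], [5.0, 6.0]])   # thresholds (invented)
P = priority(r, t, r_max=10.0)           # -> binarized/normalized priorities
```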

#### **Step 2—Seriation of the task priority matrix**

In general, the goal of a seriation problem is to find a permutation function $\Psi^*$ that optimizes the value of a given loss function $L$ on an $n \times m$ dissimilarity matrix $\mathbf{D}$:

$$\Psi^* = \arg\min_{\Psi} L(\Psi(\mathbf{D})). \tag{6}$$

In this study, the loss function is the Euclidean distance between neighboring cells. Finding simultaneous row and column permutations that minimize a loss function is an NP-complete problem that can be reduced directly to a traveling salesman problem [37]; therefore, hierarchical clustering [38], a fast approximation method, is used to specify blocks of similar risky processes and risk components. Seriation arranges similar risky processes and risk components next to each other; however, it does not delimit these blocks.
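As a rough sketch of this approximation, the leaf order of a hierarchical clustering can be used to permute the rows and columns of the priority matrix. This is only an illustration of the idea with SciPy, not the exact procedure of [37,38]:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def seriate(P):
    """Approximate seriation: permute rows and columns of the priority
    matrix P by the leaf order of an average-linkage hierarchical
    clustering, so that similar rows/columns become adjacent."""
    row_order = leaves_list(linkage(P, method="average"))
    col_order = leaves_list(linkage(P.T, method="average"))
    return P[np.ix_(row_order, col_order)], row_order, col_order

# toy priority matrix: rows 0 and 2 are similar risky processes
P = np.array([[0.9, 0.1, 0.8],
              [0.1, 0.2, 0.1],
              [0.8, 0.1, 0.9]])
S, rows, cols = seriate(P)
```

After seriation, the two similar risky rows (0 and 2) end up next to each other, which is exactly the structural information the block-detection step relies on.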

#### **Step 3—Specification of risky blocks in the task priority matrix**

**Definition 6.** *A block is a submatrix of the task priority matrix that specifies risky processes (as rows) and risk components (as columns) simultaneously. A selected block in which the median of the cell elements is significantly greater than both the nonselected processes and risk components represents a risky block.*

Risky blocks are identified with the iterative binary biclustering of gene sets (iBBiG) [39] algorithm. This algorithm assumes that the input dataset is binary; if this assumption does not hold, the first step is to binarize the dataset based on a given threshold (*τ*). Because **E** is a binary matrix, if **P** = **E**, then **P** is also binary; otherwise, the threshold is based on the judgment of the decision makers.

The applied iBBiG algorithm balances the homogeneity (in this case, the entropy) of the selected submatrix with the size of the risky block. Formally, the iBBiG algorithm maximizes the following target function, with the binarized dataset of matrix **P** denoted as **B**,

$$\max \leftarrow score := (1 - H_{\mathcal{B}})^{\alpha} \begin{cases} \sum_{i} \sum_{j} [\mathcal{B}]_{i,j} & \text{, if } Med(\mathcal{B}) > \tau \\ 0 & \text{, if } Med(\mathcal{B}) \le \tau \end{cases} \tag{7}$$

where *score* is the score value of the submatrix (bicluster, risky block) $\mathcal{B} \subseteq \mathbf{B}$, $H_{\mathcal{B}}$ is the entropy of submatrix $\mathcal{B}$, $Med(\mathcal{B})$ is the median of bicluster $\mathcal{B}$, $\alpha \in [0,1]$ is the balance exponent, and *τ* is the threshold. If *τ* or *α* increases, we obtain a smaller but more homogeneous submatrix. Previous studies [39] have suggested setting the balance exponent *α* to 0.3.
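A minimal sketch of the score in Eq. (7), assuming a binary NumPy submatrix `B` and interpreting $H_{\mathcal{B}}$ as the binary (Shannon) entropy of the cell values:

```python
import numpy as np

def bicluster_score(B, alpha=0.3, tau=0.5):
    """Score of a candidate bicluster B (binary submatrix): homogeneity
    (1 - entropy) raised to the balance exponent alpha, multiplied by
    the bicluster size, provided the median of B exceeds tau."""
    p = B.mean()  # fraction of ones in the submatrix
    if p in (0.0, 1.0):
        H = 0.0   # a fully homogeneous submatrix has zero entropy
    else:
        H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    size = B.sum() if np.median(B) > tau else 0.0
    return (1 - H) ** alpha * size
```

The homogeneity term makes the score favor clean blocks of ones over large but noisy ones, which is the balance the iBBiG algorithm exploits.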

Risky blocks may overlap; however, they can be ranked by their score values.

#### **Step 4—Specification of corrective/preventive processes**

The risky blocks specify the set of risky processes and risk components that must be mitigated simultaneously across all aggregation levels, as well as the set of corrective/preventive tasks in the activity-level matrix.

If there is more than one risky block, the scores of the risky blocks can be ranked. If the set of corrective/preventive tasks and their demands are specified, the task order is a scheduling problem that can be solved with the method described in [40].

Step 1 ensures that risks are addressed at all aggregation levels. Step 2 reveals the block structure of the task priority matrix, and step 3 identifies the risky blocks, i.e., the sets of risky processes and risk components at all aggregation levels. Finally, step 4 specifies the set of corrective/preventive processes, and the method proposed in [40] is used to schedule these processes according to time, cost, and resource constraints.

#### *2.2. Requirements of the Aggregation Functions*

To evaluate and assess risks at all aggregation levels, appropriate aggregation functions must be selected. We limit our analysis to scalar aggregation values. Several content and mathematical requirements can be set for different aggregation functions.


Next, we formulate the mathematical requirements, which guarantee that the aggregation does not distort the underlying risk values.


The above requirements appear to be logical; however, they are difficult to satisfy, and, contrary to the literature, it is not certain that they are adequate. For example, in the case of additive or multiplicative models, values near the mean appear more frequently because these values originate not only from medium–medium risk value combinations but also from small–large and large–small combinations.


#### *2.3. Characterization of Potential Aggregation Functions*

In practice, the characteristics of the applied aggregation function must be considered when determining the weights $w_i$. For example, how the applied aggregation function handles distribution asymmetry and component outliers must be considered. The properties of some aggregation functions were described in [48].

A preliminary evaluation of various aggregation functions is included in Table 2. We assume that the components have a scale of [1, 10] and that the number of components is *n*.


**Table 2.** Characterization of risk aggregation functions.



#### **3. Practical Example**

Our example shows the risk-management system used by a real company. At the request of the company, we have changed some information.

#### *3.1. Research Plan*

The research objective was to test different aggregation functions in various aggregation situations. We evaluated functions that approximately satisfied the requirements discussed in Section 2.2. To select the aggregation functions, we considered the results of a previous study [13]. The basis of the examination is shown in Table 1. Due to the large number of possible cases, we analyzed only the cases shown in Table 3. The focus of each risk component is referred to as its "component"; at the lowest aggregation level, these components can be a part of a product or process.

At higher aggregation levels, the risk component is the result of lower-level aggregations, e.g., the RPN.


**Table 3.** Examination plan.



#### *3.2. Process Hierarchy*

To demonstrate the proposed matrix-based risk analysis, we use a three-level hierarchy. The detailed hierarchy is described below:

- **4.** Production

	- **4.1.1.** Start processing order
	- **4.1.2.** Entry production control form
	- **4.5.1.** Product engineering
	- **4.5.2.** Product planning

- **5.1.** Purchasing

	- **5.2.1.** Vehicle arrival
	- **5.2.2.** Unloading
	- **5.2.3.** Unwrapping, inspection.

In this example, each subprocess has 2–4 failure modes. At the lowest level, we used six risk components (namely, the occurrence (O), severity (S), detection (D), control (C), information (I), and range (R)) to describe the risk.

#### *3.3. Results of the Matrix-Based Risk Assessment*

#### 3.3.1. Bidirectional Aggregation

The results obtained at the lowest level are shown in Figure 1.

**Figure 1.** Results of bidirectional aggregation at the lowest level.

In Figure 1, the aggregation directions are indicated by the arrows. In one case, we first performed horizontal aggregation (1a). This approach is consistent with common practice: the RPN is typically calculated as a product function by using risk components such as the occurrence and severity. These RPNs can be aggregated further (1b). The other case is the opposite scenario. First, we aggregated the same risk components for different subprocesses (2a); then, the resulting indicators were aggregated by using different functions (2b). There are two interesting ways to view the results:
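The two directions can be illustrated with a toy matrix of risk components per failure mode (the data are invented; the product plays the role of the horizontal function and the mean the vertical one):

```python
import numpy as np

# invented data: 2 failure modes (rows) x 3 risk components (columns)
R = np.array([[2, 3, 1],
              [4, 2, 5]])

# direction 1: horizontal first (RPN-style product per failure mode),
# then vertical aggregation of the RPNs
rpn = R.prod(axis=1)       # one RPN per failure mode
agg1 = rpn.mean()

# direction 2: vertical first (mean per risk component),
# then horizontal aggregation of the component indicators
comp = R.mean(axis=0)
agg2 = comp.prod()
```

As in the paper's example, the two directions give close but generally different aggregated values.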

1. Determining which functions should be used in different aggregation situations; and
2. Comparing the results of the two aggregation directions.

Ad 1. The aggregated values obtained from the same data by using different functions differ significantly. Due to the limited extent of this paper, it is not possible to interpret all the results; however, we discuss some important ones. No linear results were obtained with the product and corrected product (normalized to the interval $[1/10^{n-1}, 10]$) functions. Based on preliminary theoretical considerations, it is still interesting to determine how the results deviate from the aggregated values. In this respect, the arithmetic mean, geometric mean, and median methods appear to perform better. However, because the risk components at this level differ, additive models (such as the sum, mean, and median approaches) cannot be applied. Thus, our recommendation is to use the geometric mean method. When aggregating values at the next levels, we work with homogeneous data; thus, the indicators provided by aggregation functions based on the additive model (such as the sum, mean, median, and frequency) can be interpreted.
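For comparison, the additive and multiplicative aggregations discussed above can be computed with the standard library alone; the component values are invented:

```python
import math
import statistics as st

components = [2, 3, 1, 8]   # invented risk components on a [1, 10] scale
n = len(components)

mean = st.mean(components)               # arithmetic mean (additive model)
geomean = st.geometric_mean(components)  # multiplicative, but scale-preserving
median = st.median(components)
product = math.prod(components)          # classic RPN-style product
corrected = product / 10 ** (n - 1)      # product rescaled toward the [1/10**(n-1), 10] interval
```

Note how the product leaves the [1, 10] scale entirely while the corrected product collapses toward very small values whenever a few small components are present, which is the bias discussed below.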

Ad 2. The values of the two aggregation directions were compared.

In Figure 1, we connected the corresponding data obtained with different aggregation directions. For example, the arithmetic mean is 1.96–1.96, the geometric mean is 1.86–1.8, and the median is 1.77–1.97. Surprisingly, the two aggregation directions led to nearly identical results. However, this finding cannot be generalized, as it depends on the data. The next level of aggregation is combining production and logistics. The aggregation results along the entire hierarchy are shown in Figure 2.

#### 3.3.2. Aggregating Warnings

Warnings can also be aggregated. We aggregated the warnings along the hierarchy, as shown in Figure 3.

**Figure 3.** Results of multilevel warning aggregation.

The function results can be summarized as follows:

One issue with the product function is apparent: strong bias. As a result, warnings may result in Type I or Type II errors. Normalizing the product to the interval $[1/10^{n-1}, 10]$ is not a good solution because the distortion remains. Although 10, as the largest scale value, is psychologically advantageous for judging the risk, in practice, small aggregated risk values are generated even if there are only a few small values among the component risks. This result can be observed in the prod/$10^{n-1}$ lines in Figure 1. These low values lead to cumulative bias during further aggregations. Thus, for expected-value-type aggregated risk values or heterogeneous components, we recommend the geometric mean or potentially the radial distance instead of the product. As a result of the above findings, horizontal aggregation is proposed for the lowest level, while vertical aggregation is proposed for higher levels. As can be seen in Figures 2 and 3, multilevel aggregation can be implemented with both risk values and warnings. Combining this with the two-way (horizontal and vertical) aggregation directions offers a versatile, multipurpose application opportunity that cannot be found in the literature. A further option for using this hierarchical structure is to generate risk mitigation countermeasures.

#### 3.3.3. Generating Preventive Actions

Following the four steps of the proposed method (Section 2.1.2), first, the aggregated risk values were calculated by using the six risk components and the failure modes at the lowest evaluation level. Five aggregation methods, namely, the (1) (arithmetic) mean, (2) geometric mean, (3) median, (4) maximum, and (5) product normalized to the interval [1, 10] methods, were used to calculate the values of the rows (process components) and columns (risk components). The processes, subprocesses, and failure modes are highlighted in Figure 4. In addition, the background color of each cell indicates the risk level, with red cells indicating higher risk values and green cells indicating lower risk values.

The aggregated values are calculated in two ways, as shown in Figure 4. The left side of Figure 4 shows the first method, in which the risk values of the process components are aggregated first, whereas the right side of Figure 4 shows the opposite calculation method.


**Figure 4.** Risk-level matrix for production processes.

A comparison of the results shown in Figure 4 indicates that the different aggregation methods result in the same trends in the aggregated risk values. This finding was confirmed by the seriation results, in which the process and risk components were calculated at the same level, and the biclustering results, in which the sets of risk and process components were selected simultaneously. Therefore, only the first aggregation mode was considered.

To specify the set of risk/process components that must be mitigated, we use two methods. In the first approach, which is an unsupervised method, a predefined threshold matrix is not necessary. In this case, we want to identify the set of risk/process components and their aggregations that are greater than a specified quantile. In contrast, a threshold matrix is specified in the supervised risk evaluation method, with the risk event matrix specifying the risk values of the risk and process components to be mitigated. However, because the risk and process components have common corrective/preventive tasks, this set should also be collected by seriation and biclustering methods.
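The two selection modes can be sketched as follows; the risk matrix and the common threshold of 5 are invented for illustration:

```python
import numpy as np

# invented aggregated risk-level matrix (rows: processes, cols: risk components)
R = np.array([[7.0, 2.0, 8.0],
              [3.0, 1.0, 2.0],
              [6.0, 2.0, 9.0]])

# unsupervised: flag everything at or above a chosen quantile of all risk values
tau = np.quantile(R, 0.75)
B_unsupervised = (R >= tau).astype(int)

# supervised: compare against a predefined threshold matrix T
# (here a common threshold for all processes, as in the example)
T = np.full_like(R, 5.0)
B_supervised = (R >= T).astype(int)  # plays the role of the binary risk event matrix
```

Either binary matrix can then be fed to the seriation and biclustering steps to collect the risk/process components with common corrective/preventive tasks.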

Figure 5 shows the seriation (step 3) and biclustering (step 4) results for two thresholds (*τ* = 0.5 (Med) and *τ* = 0.75 (Q1)).


**Figure 5.** The unsupervised risk evaluation results. The seriated and biclustered risk-level matrices with *τ* = 0.5 (Med) and *τ* = 0.75 (Q1) are shown.

Figure 5 identifies two overlapping *τ* = 0.5 (Med) biclusters and one overlapping *τ* = 0.75 (Q1) bicluster. Increasing the value of *τ* leads to smaller, cleaner biclusters. Because the risk/process components and their aggregations are both considered, the selected and omitted rows and columns must be discussed.

The seriation and biclustering results indicate the set of risk and process components and their aggregations. The results show that the risk values in the production preparation process (4.5) and the risk components during the product engineering (4.5.1) and production engineering (4.5.2) processes should both be mitigated. However, the customer orders (4.1) and their subprocesses were not selected. Although both biclusters identified the information (I) risk component, neither specified the detection (D) value. The maximum aggregation metric, which identifies the riskiest process and risk components, is always applied to the bicluster; however, the product metric, which is used in the FMEA approach, is never applied. The results also show that if there are several risky processes at a higher aggregation level, the mean and median cannot be used to identify the risks to be mitigated.

Figure 6 shows specific thresholds for the risk components and their aggregations. A risk value should be mitigated (red background cells) if its value is greater than or equal to the threshold value. In this example, thresholds are specified for the risk components and their aggregations; however, thresholds are not specified for the process components and their aggregations. Therefore, common thresholds are assumed for all kinds of processes.


**Figure 6.** The supervised risk evaluation results. The seriated and biclustered risk-level matrices for different risk events are shown.

Figure 6 shows the seriated and biclustered risk-level matrices for different risk events. In this case, two overlapping biclusters can be specified for both the *α* = 0.3 and *α* = 1.0 parameters that indicate the sets of risk components and their aggregations, as well as the sets of process components and their aggregations. If the risk-level matrix is seriated and biclustered according to the binary values of the risk event matrix, the set of specified risk/process components is similar to the set generated by the unsupervised risk evaluation method (see Figure 5). Additionally, in this case, two overlapping biclusters can be identified. However, the Q1 and Med biclusters are identical. In this case, the purity can be increased by increasing the value of the *α* parameter. Regardless of whether the threshold matrix is included or excluded, the identified risk values that should be mitigated specify the set of corrective/preventive improvement tasks (see Figure 7). Figure 7 shows part of the matrix of corrective/preventive actions. Five tasks, namely, (1) feedback on customer communication, (2) feedback on internal communication, (3) meeting deadlines and faster recognition, (4) more frequent updates, and (5) improving forecasts, are considered at the failure mode level, whereas the *maintaining requirements and increasing discipline, training, and bonuses* tasks are considered at the aggregated levels. It is important to note that corrective/preventive actions do not need to be specified for all cells. Because the maximal values are corrected if and only if one of the risk/process components must be corrected, corrective/preventive actions should be specified only for the risk/process components.

Figure 7 shows the selected cells for parameters *α* = 0.3 and *α* = 1.0.

**Figure 7.** The matrix of corrective/preventive actions for *α* = 0.3 and *α* = 1.0.

In this practical example, both selections required aggregated corrective/preventive tasks, such as *maintaining requirements and increasing discipline, training, and bonuses*. This result indicates that not only should failures be corrected or prevented but also that these failures should be prevented at higher risk and process levels.

#### **4. Summary and Conclusions**

A real-world example is used to demonstrate the proposed novel multilevel matrix-based risk assessment method for mitigating risk. The paper contributes three key findings to the literature. (*C*1) The proposed set of multilevel matrices, known as the enterprise-level matrix (ELM), supports the whole risk assessment process, including identifying the risks (e.g., the RLM), evaluating the risks (e.g., the TLM), and determining the corrective/preventive actions for risk mitigation (e.g., the ALM). (*C*2) The multilevel matrix structure allows decision makers to address the process and risk components and their multipurpose aggregations in the same matrix. As a result, the process components, all levels of the process and risk components, the aggregated risk values, and the risk areas at all levels of the enterprise can be evaluated simultaneously. The proposed matrix-based method does not limit the number of risk components or the number of levels in the aggregation hierarchy. In addition, to the best of our knowledge, this is the first method that aggregates both the risk and process components to evaluate risks at different process levels. (*C*3) By employing seriation and biclustering methods, the risk-level and threshold-level matrices can both be reordered to identify warnings or risks for the process and risk components simultaneously. If more than one aggregation method is employed to aggregate the risk/process components, the employed data-mining method, namely, the biclustering and seriation method, selects the appropriate aggregation functions, which indicate the risks at higher process and risk aggregation levels. The employed data-mining method specifies multilevel submatrices that identify the process components, processes, process areas, risk components, and risk areas simultaneously. According to the proposed multilevel submatrices, including the RLM and TLM, the appropriate corrective/preventive actions can be proposed based on the ALM to mitigate risks at different levels.

In this work, we ignored the case where there are dependencies between risk components. This is a limitation compared to real cases and opens up future research opportunities. In the practical example, we omitted the weighting of the risks; however, this limitation can easily be addressed by using formulas containing weights. A practical implementation limitation is that choosing between the two aggregation directions and the several candidate functions is a time-consuming process.

**Author Contributions:** Conceptualization, Z.K.; Methodology, Z.T.K.; Validation, T.C. and I.M.; Writing–original draft, Z.K. and Z.T.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work has been implemented by the TKP2021-NVA-10 project with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the 2021 Thematic Excellence Programme funding scheme.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Nomenclature**



#### **References**

